Scientists have long accepted the existence of animal culture, be it tool use in New Caledonian crows or Japanese macaques washing sweet potatoes.
Read the paper here: Bumblebees socially learn behaviour too complex to innovate alone
But one thing thought to distinguish human culture is our ability to do things too complex to work out alone — no one could have split the atom or traveled into space without relying on the years of iterative advances that came first.
But now, a team of researchers think they’ve observed this phenomenon for the first time outside of humans – in bumblebees.
Tenstorrent, the firm led by legendary chip architect Jim Keller, the mastermind behind AMD's Zen architecture and Tesla's original self-driving chip, has launched its first hardware. Grayskull is a RISC-V-based alternative to GPUs that is designed to be easier to program and scale, and reportedly excels at handling run-time sparsity and conditional computation.
Off the back of this, Tenstorrent has also unveiled its Grayskull-powered DevKits – the standard Grayskull e75 and the more powerful Grayskull e150. Both are inference-only hardware designed for AI development, and come with TT-Buda and TT-Metalium software. The former is for running models right away, while the latter is for users who want to customize their models or write new ones.
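For a sense of the intended split between the two software layers, here is a rough sketch of the "run models right away" TT-Buda path in Python. Treat the module and call names (`pybuda.PyTorchModule`, `pybuda.run_inference`) as assumptions drawn from Tenstorrent's public examples, not a verified API; TT-Metalium is the lower-level route for writing custom kernels.

```python
# Hedged sketch of running a stock model through TT-Buda on a Grayskull card.
# The pybuda names below are assumptions based on Tenstorrent's public
# examples; check the official documentation before relying on them.
import pybuda  # assumed package name for the TT-Buda runtime
from transformers import BertModel, BertTokenizer

# BERT is one of the models the Grayskull DevKits are advertised to support.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Grayskull runs inference.", return_tensors="pt")

# Wrap the PyTorch model and run it on the device without writing kernels.
module = pybuda.PyTorchModule("bert_encoder", model)  # assumed wrapper
outputs = pybuda.run_inference(module, inputs=[inputs["input_ids"]])  # assumed call
print(outputs)
```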
The Santa Clara-based tech firm’s milestone launch comes hot on the heels of a partnership with Japan’s Leading-edge Semiconductor Technology Center (LSTC). Tenstorrent’s RISC-V and Chiplet IP will be used to build a state-of-the-art 2nm AI Accelerator, with the ultimate goal of revolutionizing AI performance in Japan.
By the power of Grayskull!
The Grayskull e75 model is a low-profile, half-length PCIe Gen 4 board with a single Grayskull processor, operating at 75W. The more advanced e150 model is a standard-height, 3/4-length PCIe Gen 4 board containing one Grayskull processor operating at up to 200W, balancing power and throughput.
Tenstorrent processors comprise a grid of cores known as Tensix Cores and come with network communication hardware so they can talk with one another directly over networks, instead of through DRAM.
The Grayskull DevKits support a wide range of models, including BERT for natural language processing tasks, ResNet for image recognition, Whisper for speech recognition and translation, YOLOv5 for real-time object detection, and U-Net for image segmentation.
The Grayskull e75 and e150 DevKits are available for purchase now at $599 and $799, respectively.
Culture in animals can be broadly conceptualized as the sum of a population’s behavioural traditions, which, in turn, are defined as behaviours that are transmitted through social learning and that persist in a population over time[4]. Although culture was once thought to be exclusive to humans and a key explanation of our own evolutionary success, the existence of non-human cultures that change over time is no longer controversial. Changes in the songs of Savannah sparrows[5] and humpback whales[6,7,8] have been documented over decades. The sweet-potato-washing behaviour of Japanese macaques has also undergone several distinctive modifications since its inception at the hands of ‘Imo’, a juvenile female, in 1953[9]. Imo’s initial behaviour involved dipping a potato in a freshwater stream and wiping sand off with her spare hand, but within a decade it had evolved to include repeated washing in seawater in between bites rather than in fresh water, potentially to enhance the flavour of the potato. By the 1980s, a range of variations had appeared among macaques, including stealing already-washed potatoes from conspecifics, and digging new pools in secluded areas to wash potatoes without being seen by scroungers[9,10,11]. Likewise, the ‘wide’, ‘narrow’ and ‘stepped’ designs of pandanus tools, which are fashioned from torn leaves by New Caledonian crows and used to fish grubs from logs, seem to have diverged from a single point of origin[12]. In this manner, cultural evolution can result in both the accumulation of novel traditions, and the accumulation of modifications to these traditions in turn. However, the limitations of non-human cultural evolution remain a subject of debate.
It is clearly true that humans are a uniquely encultured species. Almost everything we do relies on knowledge or technology that has taken many generations to build. No one human being could possibly manage, within their own lifetime, to split the atom by themselves from scratch. They could not even conceive of doing so without centuries of accumulated scientific knowledge. The existence of this so-called cumulative culture was thought to rely on the ‘ratchet’ concept, whereby traditions are retained in a population with sufficient fidelity to allow improvements to accumulate[1,2,3]. This was argued to require so-called higher-order forms of social learning, such as imitative copying[13] or teaching[14], which have, in turn, been argued to be exclusive to humans (although, see a review of imitative copying in animals[15] for potential examples). But if we strip the definition of cumulative culture back to its bare bones, for a behavioural tradition to be considered cumulative, it must fulfil a set of core requirements[1]. In short, a beneficial innovation or modification to a behaviour must be socially transmitted among individuals of a population. This process may then occur repeatedly, leading to sequential improvements or elaborations. According to these criteria, there is evidence that some animals are capable of forming a cumulative culture in certain contexts and circumstances[1,16,17]. For example, when pairs of pigeons were tasked with making repeated flights home from a novel location, they found more efficient routes more quickly when members of these pairs were progressively swapped out, when compared with pairs of fixed composition or solo individuals[16]. This was thought to be due to ‘innovations’ made by the new individuals, resulting in incremental improvements in route efficiency. However, the end state of the behaviour in this case could, in theory, have been arrived at by a single individual[1]. It remains unclear whether modifications can accumulate to the point at which the final behaviour is too complex for any individual to innovate itself, but can still be acquired by that same individual through social learning from a knowledgeable conspecific. This threshold, often including the stipulation that re-innovation must be impossible within an individual’s own lifetime, is argued by some to represent a fundamental difference between human and non-human cognition[3,13,18].
Bumblebees (Bombus terrestris) are social insects that have been shown to be capable of acquiring complex, non-natural behaviours through social learning in a laboratory setting, such as string-pulling[19] and ball-rolling to gain rewards[20]. In the latter case, they were even able to improve on the behaviour of their original demonstrator. More recently, when challenged with a two-option puzzle-box task and a paradigm allowing learning to diffuse across a population (a gold standard of cultural transmission experiments[21], as used previously in wild great tits[22]), bumblebees were found to acquire and maintain arbitrary variants of this behaviour from trained demonstrators[23]. However, these previous investigations involved the acquisition of a behaviour that each bee could also have innovated independently. Indeed, some naive individuals were able to open the puzzle box, pull strings and roll balls without demonstrators[19,20,23]. Thus, to determine whether bumblebees could acquire a behaviour through social learning that they could not innovate independently, we developed a novel two-step puzzle box (Fig. 1a). This design was informed by a lockbox task that was developed to assess problem solving in Goffin’s cockatoos[24]. Here, cockatoos were challenged to open a box that was sealed with five inter-connected ‘locks’ that had to be opened sequentially, with no reward for opening any but the final lock. Our hypothesis was that this degree of temporal and spatial separation between performing the first step of the behaviour and the reward would make it very difficult, if not impossible, for a naive bumblebee to form a lasting association between this necessary initial action and the final reward. Even if a bee opened the two-step box independently through repeated, non-directed probing, as observed with our previous box[23], if no association formed between the combination of the two pushing behaviours and the reward, this behaviour would be unlikely to be incorporated into an individual’s repertoire. If, however, a bee was able to learn this multi-step box-opening behaviour when exposed to a skilled demonstrator, this would suggest that bumblebees can acquire behaviours socially that lie beyond their capacity for individual innovation.
Fig. 1: Two-step puzzle-box design and experimental set-up.
a, Puzzle-box design. Box bases were 3D-printed to ensure consistency. The reward (50% w/w sucrose solution, placed on a yellow target) was inaccessible unless the red tab was pushed, rotating the lid anti-clockwise around a central axis, and the red tab could not move unless the blue tab was first pushed out of its path. See Supplementary Information for a full description of the box design elements. b, Experimental set-up. The flight arena was connected to the nest box with an acrylic tunnel, and flaps cut into the side allowed the removal and replacement of puzzle boxes during the experiment. The sides were lined with bristles to prevent bees escaping. c, Alternative action patterns for opening the box. The staggered-pushing technique is characterized by two distinct pushes (1, blue arrow and 2, red arrow), divided by either flying (green arrows) or walking in a loop around the inner side of the red tab (orange arrow). The squeezing technique is characterized by a single, unbroken movement, starting at the point at which the blue and red tabs meet and pushing through, squeezing between the outer side of the red tab and the outer shield, and making a tight turn to push against the red tab.
The two-step puzzle box (Fig. 1a) relied on the same principles as our previous single-step, two-option puzzle box[23]. To access a sucrose-solution reward, placed on a yellow target, a blue tab had to first be pushed out of the path of a red tab, which could then be pushed in turn to rotate a clear lid around a central axis. Once rotated far enough, the reward would be exposed beneath the red tab. A sample video of a trained demonstrator opening the two-step box is available (Supplementary Video 1). Our experiments were conducted in a specially constructed flight arena, attached to a colony’s nest box, in which all bees that were not currently undergoing training or testing were confined (Fig. 1b).
In our previous study, several bees successfully learned to open the two-option, single-step box during control population experiments, which were conducted in the absence of a trained demonstrator across 6–12 days[23]. Thus, to determine whether the two-step box could be opened by individual bees starting from scratch, we sought to conduct a similar experiment. Two colonies (C1 and C2) took part in these control population experiments for 12 days, and one colony (C3) for 24 days. In brief, on 12 or 24 consecutive days, bees were exposed to open two-step puzzle boxes for 30 min of pre-training and then to closed boxes for 3 h (meaning that colonies C1 and C2 were exposed to closed boxes for 36 h total, and colony C3 for 72 h total). No trained demonstrator was added to any group. On each day, bees foraged willingly during the pre-training, but no boxes were opened in any colony during the experiment. Although some bees were observed to probe around the components of the closed boxes with their proboscises, particularly in the early population-experiment sessions, this behaviour generally decreased as the experiment progressed. A single blue tab was opened in full in colony C1, but this behaviour was neither expanded on nor repeated.
Learning to open the two-step box was not trivial for our demonstrators, with the finalized training protocol taking around two days for them to complete (compared with several hours for our previous two-option, single-step box[23]). Developing a training protocol was also challenging. Bees readily learned to push the rewarded red tab, but not the unrewarded blue tab, which they would not manipulate at all. Instead, they would repeatedly push against the blocked red tab before giving up. This necessitated the addition of a temporary yellow target and reward beneath the blue tab, which, in turn, required the addition of the extended tail section (as seen in Fig. 1a), because during later stages of training this temporary target had to be removed and its absence concealed. This had to be done gradually and in combination with an increased reward on the final target, because bees quickly lost their motivation to open any more boxes otherwise. Frequently, reluctant bees had to be coaxed back to participation by providing them with fully opened lids that they did not need to push at all. In short, bees seemed generally unwilling to perform actions that were not directly linked to a reward, or that were no longer being rewarded. Notably, when opening two-step boxes after learning, demonstrators frequently pushed against the red tab before attempting to push the blue, even though they were able to perform the complete behaviour (and subsequently did so). The combination of having to move away from a visible reward and take a non-direct route, and the lack of any reward in exchange for this behaviour, suggests that two-step box-opening would be very difficult, if not impossible, for a naive bumblebee to discover and learn for itself—in line with the results of the control population experiment.
For the dyad experiments, a pair of bees, including one trained demonstrator and one naive observer, was allowed to forage on three closed puzzle boxes (each filled with 20 μl 50% w/w sucrose solution) for 30–40 sessions, with unrewarded learning tests given to the observer in isolation after 30, 35 and 40 joint sessions. With each session lasting a maximum of 20 min, this meant that observers could be exposed to the boxes and the demonstrator for a total of 800 min, or 13.3 h (markedly less time than the bees in the control population experiments, who had access to the boxes in the absence of a demonstrator for 36 or 72 h total). If an observer passed a learning test, it immediately proceeded to 10 solo foraging sessions in the absence of the demonstrator. The 15 demonstrator and observer combinations used for the dyad experiments are listed in Table 1, and some demonstrators were used for multiple observers. Of the 15 observers, 5 passed the unrewarded learning test, with 3 of these doing so on the first attempt and the remaining 2 on the third. This relatively low number reflected the difficulty of the task, but the fact that any observers acquired two-step box-opening at all confirmed that this behaviour could be socially learned.
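The exposure arithmetic quoted here is easy to verify with a quick calculation:

```python
# Check the exposure times quoted above.
max_sessions = 40      # joint foraging sessions per observer
session_min = 20       # maximum minutes per session

observer_min = max_sessions * session_min
print(observer_min, round(observer_min / 60, 1))  # 800 min -> 13.3 h with the demonstrator

# Control-population bees instead saw closed boxes for 3 h per day.
print(12 * 3, 24 * 3)  # 36 h (colonies C1 and C2), 72 h (colony C3)
```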
Table 1 Combinations of demonstrators and observers, with outcomes
The post-learning solo foraging sessions were designed to further test observers’ acquisition of two-step box-opening. Each session lasted up to 10 min, and 50 μl of 50% sucrose solution was placed on the yellow target in each box: as Bombus terrestris foragers have been found to collect 60–150 μl of sucrose solution per foraging trip depending on their size, this meant that each bee could reasonably be expected to open two boxes per session[25]. Although all bees that proceeded to the solo foraging stage repeated two-step box-opening, confirming their status as learners, only two individuals (A-24 and A-6; Table 1) met the criterion to be classified as proficient learners (that is, they opened 10 or more boxes). This was the same threshold applied to learners in our previous work with the single-step, two-option box[23]. However, it should be noted that learners from our present study had comparatively limited post-learning exposure to the boxes (a total of 100 min on one day) compared with those from our previous work. Proficient learners from our single-step puzzle-box experiments typically attained proficiency over several days of foraging, and had access to boxes for 180 min each day for 6–12 days[23]. Thus, these comparatively low numbers of proficient bees are perhaps unsurprising.
Two different methods of opening the two-step puzzle box were observed among the trained demonstrators during the dyad experiments, and were termed ‘staggered-pushing’ and ‘squeezing’ (Fig. 1c; Supplementary Video 2). This finding essentially transformed the experiment into a ‘two-action’-type design, reminiscent of our previous single-step, two-option puzzle-box task[23]. Of these techniques, squeezing typically resulted in the blue tab being pushed less far than staggered-pushing did, often only just enough to free the red tab, and the red tab often shifted forward as the bee squeezed between this and the outer shield. Among demonstrators, the squeezing technique was more common, being adopted as the main technique by 6 out of 9 individuals (Table 1). Thus, 10 out of 15 observers were paired with a squeezing demonstrator.
Although not all observers that were paired with squeezing demonstrators learned to open the two-step box (5 out of 10 succeeded), all observers paired with staggered-pushing demonstrators (n = 5) failed to learn two-step box-opening. This discrepancy was not due to the number of demonstrations being received by the observers: there was no difference in the number of boxes opened by squeezing demonstrators compared with staggered-pushing demonstrators when the number of joint sessions was accounted for (unpaired t-test, t = −2.015, P = 0.065, degrees of freedom (df) = 13, 95% confidence interval (CI) = −3.63 to 0.13; Table 2). This might have been because the squeezing demonstrators often performed their squeezing action several times, looping around the red tab, which lengthened the total duration of the behaviour despite the blue tab being pushed less than during staggered-pushing. Closer investigation of the dyads that involved only squeezing demonstrators revealed that demonstrators paired with observers that failed to learn tended to open fewer boxes, but this difference was not significant. There was also no difference between these dyads and those that included a staggered-pushing demonstrator (one-way ANOVA, F = 2.446, P = 0.129, df = 12; Table 2 and Fig. 2a). Together, these findings suggested that demonstrator technique might influence whether the transmission of two-step box-opening was successful. Notably, successful learners also appeared to acquire the specific technique used by their demonstrator: in all cases, this was the squeezing technique. In the solo foraging sessions recorded for successful learners, they also tended to preferentially adopt the squeezing technique (Table 1). The potential effect of certain demonstrators being used for multiple dyads is analysed and discussed in the Supplementary Results (see Supplementary Table 2 and Supplementary Fig. 4).
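For readers who want to see the shape of that comparison, here is a minimal sketch of an unpaired, equal-variance t-test in Python. The opening rates below are synthetic placeholders chosen only to mirror the group sizes (10 squeezing dyads, 5 staggered-pushing dyads, hence df = 13), not the study's data.

```python
# Sketch of the unpaired t-test comparing boxes opened per joint session by
# squeezing versus staggered-pushing demonstrators. Synthetic numbers only;
# the real per-dyad values live in Table 2 of the paper.
from scipy import stats

squeezing = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0, 1.6, 2.3]  # n = 10 dyads
staggered = [2.2, 2.6, 2.9, 2.4, 2.1]                            # n = 5 dyads

# Default ttest_ind assumes equal variances, giving df = n1 + n2 - 2 = 13,
# matching the degrees of freedom reported above.
t, p = stats.ttest_ind(squeezing, staggered)
print(f"t = {t:.3f}, p = {p:.3f}")
```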
Table 2 Characteristics of dyad demonstrators and observers
Fig. 2: Demonstrator action patterns affect the acquisition of two-step box-opening by observers.
a, Demonstrator opening index. The demonstrator opening index was calculated for each dyad as the total incidence of box-opening by the demonstrator/number of joint foraging sessions. b, Observer following index. Following behaviour was defined as the observer being present on the surface of the box, within a bee’s length of the demonstrator, while the demonstrator performed box-opening. The observer following index was calculated as the total duration of following behaviour/number of joint foraging sessions. Data in a,b were analysed using one-way ANOVA and are presented as box plots. The bounds of the box are drawn from quartile 1 to quartile 3 (showing the interquartile range), the horizontal line within shows the median value and the whiskers extend to the most extreme data point that is no more than 1.5 × the interquartile range from the edge of the box. n = 15 independent experiments (squeezing-pass group, n = 5; squeezing-fail group, n = 5; and staggered-pushing-fail (stagger-fail) group, n = 5). c, Duration of following behaviour over the dyad joint foraging sessions. Following behaviour significantly increased with the number of joint foraging sessions, with the sharpest increase seen in dyads that included a squeezing demonstrator and an observer that successfully acquired two-step box-opening. Data were analysed using Spearman’s rank correlation coefficient tests (two-tailed), and the figures show measures taken from each observer in each group. Data for individual observers are presented in Supplementary Fig. 1.
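Both indices in this caption are simple per-session averages. A minimal illustration with invented session logs (not data from the study):

```python
# Toy computation of the two per-dyad indices defined in the Fig. 2 caption.
# The session logs below are invented for illustration.
openings_per_session = [3, 2, 4, 3, 2]             # demonstrator box-openings in each joint session
following_secs_per_session = [10, 25, 30, 42, 55]  # observer following duration (s) in each session

n_sessions = len(openings_per_session)
opening_index = sum(openings_per_session) / n_sessions
following_index = sum(following_secs_per_session) / n_sessions

print(f"demonstrator opening index: {opening_index:.2f} openings per session")
print(f"observer following index: {following_index:.2f} s per session")
```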
To determine whether observer behaviour might have differed between those who passed and failed, we investigated the duration of their ‘following’ behaviour, which was a distinctive behaviour that we identified during the joint foraging sessions. Here, an observer followed closely behind the demonstrator as it walked on the surface of the box, often close enough to make contact with the demonstrator’s body with its antennae (Supplementary Video 3). In the case of squeezing demonstrators, which often made several loops around the red tab, a following observer would make these loops also. To ensure we quantified only the most relevant behaviour, we defined following behaviour as ‘instances in which an observer was present on the box surface, within a single bee’s length of the demonstrator, while it performed two-step box-opening’. Thus, following behaviour could be recorded only after the demonstrator began to push the blue tab, and before it accessed the reward. This was quantified for each joint foraging session for the dyad experiments (Supplementary Table 1). There was no significant correlation between the demonstrator opening index and the observer following index (Spearman’s rank correlation coefficient, rs = 0.173, df = 13, P = 0.537; Supplementary Fig. 2), suggesting that increases in following behaviour were not due simply to there being more demonstrations of two-step box-opening available to the observer.
There was no statistically significant difference in the following index between dyads with squeezing and dyads with staggered-pushing demonstrators; between dyads in which observers passed and those in which they failed; or when both demonstrator preference and learning outcome were accounted for (Table 2). This might have been due to the limited sample size. However, the following index tended to be higher in dyads in which the observer successfully acquired two-step box-opening than in those in which the observer failed (34.82 versus 16.26, respectively; Table 2) and in dyads with squeezing demonstrators compared with staggered-pushing demonstrators (25.78 versus 15.76, respectively; Table 2). When both factors were accounted for, following behaviour was most frequent in dyads with a squeezing demonstrator and an observer that successfully acquired two-step box-opening (34.82 versus 16.75 (‘squeezing-fail’ group) versus 15.76 (‘staggered-pushing-fail’ group); Table 2).
There was, however, a strong positive correlation between the duration of following behaviour and the number of joint foraging sessions, which equated to time spent foraging alongside the demonstrator. This association was present in dyads from all three groups but was strongest in the squeezing-pass group (Spearman’s rank order correlation coefficient, rs = 0.408, df = 168, P < 0.001; Fig. 2c). This suggests, in general, either that the latency between the start of the demonstration and the observer following behaviour decreased over time, or that observers continued to follow for longer once arriving. However, the observers from the squeezing-pass group tended to follow for longer than any other group, and the duration of their following increased more rapidly. This indicates that following a conspecific demonstrator as it performed two-step box-opening (and, specifically, through squeezing) was important to the acquisition of this behaviour by an observer.
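The trend statistic used here is Spearman's rank correlation, which tests for a monotonic relationship between session number and following duration. A sketch with synthetic data (the paper reports rs = 0.408, P < 0.001 for the squeezing-pass group):

```python
# Sketch of the Spearman rank-correlation test between joint-session number
# and following duration. The durations are synthetic, not the study's data.
from scipy import stats

session = list(range(1, 31))
# Invented upward trend with some noise, standing in for observed durations.
following = [5 + 1.2 * s + (s % 7) * 3 for s in session]

rs, p = stats.spearmanr(session, following)
print(f"rs = {rs:.3f}, p = {p:.2e}")
```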
My Dad has a Galaxy S22, and he’s perfectly content, but it’s time for him to upgrade. He’s paid off the phone with Verizon and they owe him a new one. I suggested an iPhone 15, for my own sanity as his tech support, but he’s staying loyal to Samsung, so I offered to help him set up a Galaxy S24. He refused. That phone has AI, he says, and he doesn’t want an AI phone. I get it, but I’m here with news that AI has been greatly exaggerated. There is nothing to worry about when it comes to AI on your phone.
Actually, there is no artificial intelligence (AI) on your phone. I’ll tip my hat to Samsung on this one: they have been calling their new features “advanced intelligence,” and that’s an apt distinction. If you have concerns about general AI and what it means for the future of humanity, I can assure you that your concept of AI does not come close to what you’ll find on a smartphone today.
Do you see artificial intelligence in this phone? Nope (Image credit: Google)
Before I get into what AI on a smartphone really is, let’s put some concerns to rest. AI is not going to make decisions for you, not in any way. It won’t make phone calls or send text messages. It won’t change anything; you still make all the changes.
The new AI features on the Samsung Galaxy S24 and Google Pixel 8 make suggestions – for photos or text messages – but they don’t take any action until you press the buttons first, then hit “Done” when you’re done. You can’t make the AI work by accident. It won’t work without you.
This ‘AI’ isn’t thinking. It doesn’t have ideas. It isn’t dreaming. It isn’t a brain or a mind in any way. It’s a computer, which means it’s just a bunch of very tiny switches, and little more.
The AI doesn’t judge you. Don’t do anything weird, though, because all of your input is going through Google or Samsung, and they will totally judge you, especially if you use AI for anything untoward. Actually, I’m much more worried about the way bad people will use AI than I am worried about bad AI itself.
AI is happening behind the scenes
So, what is AI on a smartphone today? What are the new AI features that Samsung and Google brag about? There are two ways to think about AI on smartphones today. There’s AI that happens in front of you, and AI that is happening in the background.
Do you wish that your phone would learn some of your habits, so you didn’t have to repeat yourself? Like, you open the same apps every morning, but you forget to add those apps to your homescreen. You end up searching for Calendar every time you need a calendar. Nowadays, your phone will have some box where it recommends apps. Instead of searching, Calendar will just be there. That’s AI.
Apple’s iPhone uses AI behind the scenes for recommendations (Image credit: Future / Philip Berne)
Doesn’t it seem like your phone often recommends just the right app at the right time? Have you ever gone to check your balance right outside your bank ATM and your phone is offering you the banking app? Has it offered Yelp or Uber Eats around dinner time? That’s AI, and it’s nothing to worry about.
There are tons of optimizations, recommendations, and behind-the-scenes features that are learning your patterns and making the phone better as a result. That’s AI. When these companies use the term AI, what they really mean is ‘very advanced pattern recognition.’
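Stripped to its core, that kind of habit learning can be as simple as frequency counting. Here is a toy sketch; real phones use far more sophisticated on-device models, and the log format is invented:

```python
# Toy "app suggestion AI": pure pattern counting, no thinking involved.
from collections import Counter, defaultdict

# Invented log of (hour of day, app opened) events.
launch_log = [
    (8, "Calendar"), (8, "Mail"), (9, "Calendar"),
    (8, "Calendar"), (12, "Yelp"), (8, "Mail"),
]

by_hour = defaultdict(Counter)
for hour, app in launch_log:
    by_hour[hour][app] += 1

def suggest(hour, k=2):
    """Suggest the k apps most often opened around this hour."""
    return [app for app, _ in by_hour[hour].most_common(k)]

print(suggest(8))  # ['Calendar', 'Mail'] -- frequency, not intent
```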
At best, it happens where you can’t see, and your phone just works better, offering you the features you need at the right moment, or saving power so the battery lasts longer. Then there is the AI that you can see, that gets in your face.
AI that makes suggestions and changes things
So far, these are mostly editing and suggestion features. They work pretty well. Your phone can help you with your writing, your photos, and even your ideas. It can edit and make suggestions on how to make that writing, those photos, and even those ideas better.
The way this happens is through ‘very advanced pattern recognition.’ There’s no actual intelligence, and certainly no ‘artificial intelligence,’ in the way we might imagine an intelligent computer with its own thoughts and ideas. The Terminator isn’t living inside your Galaxy S24 Ultra.
When Samsung’s new Galaxy AI suggests edits to a text message, it is using its training experience reading millions of text messages. Samsung fed those messages into a very smart computer, a computer great at ‘very advanced pattern recognition,’ and now that computer can make text messages that match a situation.
Galaxy AI makes wallpaper, it doesn’t want to end humanity (Image credit: Future | Alex Walker-Todd)
Google fed millions of photos into a very smart computer. Now Google Photos can look at your image and determine how similar it is to all of the other photos it has seen. It can suggest edits to make your picture more like the millions of photos it knows. It can change your background or add and remove people, because it knows what photos are supposed to look like, and it can make yours match what it knows.
That’s all it knows. It doesn’t think, or create anything truly new. It just has millions upon millions of examples that it learned, and it is very good at creating an average idea from those examples. Even better, every time the AI makes a bad suggestion, it gets a little better next time. Just a little. Like I said, it’s pattern recognition, so every opportunity helps it recognize a more complicated pattern.
It’s very cool technology, and getting more useful all the time, but it’s nothing to worry about, not yet. The worst case scenario for AI right now is for humans to use it for nefarious purposes. The AI can make writing suggestions, so if you are writing a scam email, it may help with that. If you are creating a fake photograph, AI can help with that. It isn’t the AI technology that is frightening. Sadly, it’s those people who use technology that we should worry about.
It’s hard to get too excited about Apple’s new M3-equipped MacBook Airs. The 13-inch M2 model, released in 2022, was the first major redesign for Apple’s most popular notebook in over a decade. Last year, Apple finally gave its fanatics a big-screen ultraportable notebook with the 15-inch MacBook Air. This week, we’ve got the same two computers with slightly faster chips. They didn’t even get a real launch event from Apple, just a sleepy Monday morning press release. They look the same and are a bit faster than before — what else is there to say?
Now, I’m not saying these aren’t great computers. It’s just that we’ve been a bit spoiled by Apple’s laptops over the last few years. The M3 MacBook Air marks the inevitable innovation plateau for the company, following the monumental rise of its mobile chips and a complete refresh of its laptops and desktops. It’s like hitting cruising altitude after the excitement of takeoff — things are stable and comfortable for Apple and consumers alike.
Apple’s latest MacBook Air takes everything we loved about the M2 redesign — a sleeker and lighter case — and adds more power thanks to an M3 chip.
Pros
Sturdy and sleek design
Fast performance thanks to M3 chip
Excellent 13-inch screen
Great keyboard and trackpad
Solid quad-speaker array
Cons
Charging and USB-C ports are only on one side
$1,099 at Amazon
Apple’s big-screen MacBook Air still looks and feels great, and it’s faster thanks to an M3 chip.
Pros
Sturdy and sleek design
Fast performance thanks to M3 chip
Excellent 15-inch screen
Great keyboard and trackpad
Solid six-speaker array
Cons
Charging and USB-C ports are only on one side
$1,299 at Amazon
M3 MacBook Air vs the M2 MacBook Air
Even though they look exactly the same as before, the M3 MacBook Air models have a few new features under the hood. For one, they support dual external displays, but only when their lids are closed. That was something even the M3-equipped 14-inch MacBook Pro lacked at launch, but Apple says the feature is coming to that device via a future software update. Having dual screen support is particularly useful for office workers who may need to drop their computers onto temporary desks, but it could also be helpful for creatives with multiple monitors at home. (If you absolutely need to have your laptop display on alongside two or more external monitors, you’ll have to opt for a MacBook Pro with an M3 Pro or Max chip instead.)
Both new MacBook Air models also support Wi-Fi 6E, an upgrade over the previous Wi-Fi 6 standard with faster speeds and dramatically lower latency. You’ll need a Wi-Fi 6E router to actually see those benefits, though. According to Intel, Wi-Fi 6E’s ability to tap into seven 160MHz channels helps it avoid congested Wi-Fi 6 spectrum. Basically, you may actually be able to see gigabit speeds more often. (With my AT&T gigabit fiber connection and Wi-Fi 6 gateway, I saw download speeds of around 350 Mbps and uploads ran between 220 Mbps and 320 Mbps on both systems from my basement office. Both upload and download speeds leapt to 700 Mbps when I was on the same floor as the gateway.)
Photo by Devindra Hardawar/Engadget
Design and weight
Two years after the 13-inch M2 MacBook Air debuted, the M3 follow-up is just as sleek and attractive. It seems impossibly thin for a notebook, measuring 0.44 inches thick, and is fairly light at 2.7 pounds. We’ve seen ultraportables like LG’s Gram and the ZenBook S13 OLED that are both lighter and thinner than Apple’s hardware, but the MacBook Air still manages to feel like a more premium package. Its unibody aluminum case feels as smooth as a river stone yet as sturdy as a boulder. It’s a computer I simply love to touch.
Photo by Devindra Hardawar/Engadget
The 15-inch M3 MacBook Air is similarly thin, but clocks in half a pound heavier at 3.2 pounds. It’s still relatively light for its size, but the additional bulk makes it feel more unwieldy than the 13-inch model. I can easily slip either MacBook Air model into a tote bag when running out to grab my kids from school, but the larger model’s length makes it more annoying to carry.
For some users, though, that extra heft will be worth it. The bigger MacBook Air sports a 15.3-inch Liquid Retina screen with a sharp 2,880 by 1,864 (224 pixels per inch) resolution, making it better suited for multitasking with multiple windows or working in media editing apps. It’s also a better fit for older or visually impaired users, who may have to scale up their displays to make them more readable. (This is something I’ve noticed while shopping for computers for my parents and other older relatives. 13-inch laptops inevitably become hard to work on, unless you’re always wearing bifocals.)
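That 224 pixels-per-inch figure follows directly from the resolution and the 15.3-inch diagonal:

```python
# Pixel density = diagonal resolution in pixels / diagonal size in inches.
import math

width_px, height_px = 2880, 1864
diagonal_in = 15.3

ppi = math.hypot(width_px, height_px) / diagonal_in
print(round(ppi))  # 224
```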
While I’m impressed that Apple finally has a large, consumer-focused laptop in its lineup, I still prefer the 13-inch MacBook Air. I spend most of my day writing, Slacking with colleagues, editing photos and talking with companies over video conferencing apps, all of which are easy to do on a smaller screen. If I was directly editing more episodes of the Engadget Podcast, or chopping up video on my own, though, I’d bump up to the 14-inch MacBook Pro with an M3 Pro chip. Even then, I wouldn’t have much need for a significantly larger screen.
A lonely headphone jack that could use a USB-C companion. (Photo by Devindra Hardawar/Engadget)
It’s understandable why Apple wouldn’t want to tweak the Air’s design too much, given that it was just redone a few years ago. Still, I’d love to see a USB-C port on the right side of the machine, just to make charging easier in every location. But I suppose I should just be happy Apple hasn’t removed the headphone jack, something that’s happening all too frequently in new 13-inch notebooks, like the XPS 13.
Hardware
For our testing, Apple sent the “midnight” 13-inch MacBook Air (which is almost jet black and features a fingerprint-resistant coating that actually works), as well as the silver 15-inch model. Both computers were powered by an M3 chip with a 10-core GPU, 16GB of RAM and a 512GB SSD. While these MacBooks start at $1,099 and $1,299, respectively, the configurations we tested cost $400 more. Keep that in mind if you’re paying attention to our benchmarks, as you’ll definitely see lower figures on the base models. (The cheapest 13-inch offering only has 8GB of RAM, a 256GB SSD and an 8-core GPU, while the entry-level 15-inch unit has the same RAM and storage, along with a 10-core GPU.)
| Model | Geekbench 6 CPU (single / multi) | Geekbench 6 GPU | Cinebench R23 (single / multi) | 3DMark Wildlife Extreme |
| --- | --- | --- | --- | --- |
| Apple MacBook Air 13-inch (M3, 2024) | 3,190 / 12,102 | 30,561 | 1,894 / 9,037 | 8,310 |
| Apple MacBook Air 15-inch (M3, 2024) | 3,187 / 12,033 | 30,556 | 1,901 / 9,733 | 8,253 |
| Apple MacBook Air 13-inch (M2, 2022) | 2,570 / 9,650 | 25,295 | 1,576 / 7,372 | 6,761 |
| Apple MacBook Pro 14-inch (M3, 2023) | 3,142 / 11,902 | 30,462 | 1,932 / 10,159 | 8,139 |
M3 chip performance
I didn’t expect to see a huge performance boost on either MacBook Air, but our benchmarks ended up surprising me. Both laptops scored around 300 points higher in the Cinebench R23 single-core test, compared to the M2 MacBook Air. And when it came to the more strenuous multi-core CPU test, the 13-inch M3 Air was around 1,700 points faster, while the 15-inch model was around 2,400 points faster. (Since both machines are fan-less, there’s a good chance the larger case of the 15-inch Air allows for slightly better performance under load.)
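Those deltas come straight from the benchmark table above:

```python
# Cinebench R23 deltas cited above, taken from the benchmark table.
m2_single, m2_multi = 1576, 7372        # MacBook Air 13-inch (M2, 2022)
m3_13_single, m3_13_multi = 1894, 9037  # MacBook Air 13-inch (M3, 2024)
m3_15_multi = 9733                      # MacBook Air 15-inch (M3, 2024)

print(m3_13_single - m2_single)  # 318  -> "around 300 points higher" single-core
print(m3_13_multi - m2_multi)    # 1665 -> ~1,700 points faster (13-inch, multi-core)
print(m3_15_multi - m2_multi)    # 2361 -> ~2,400 points faster (15-inch, multi-core)
```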
There was a more noticeable difference in Geekbench 6, where the M3 models were around 40 percent faster than before. Apple is touting more middling improvements over the M2 chips — 17 percent faster single-core performance, 21 percent speedier multi-core workloads and 15 percent better GPU workloads — but it’s nice to see areas where performance is even better. Really, though, these aren’t machines meant to replace M2 systems — the better comparisons are how they measure up to nearly four-year-old M1 Macs or even creakier Intel models. Apple claims the M3 chip is up to 60 percent faster than the M1, but in my testing I saw just a 35 percent speed bump in Cinebench’s R23 multi-core test.
When it comes to real-world performance, I didn’t notice a huge difference between either M3-equipped MacBook Air, compared to the M2 model I’ve been using for the past few years. Apps load just as quickly, multitasking isn’t noticeably faster (thank goodness they have 16GB of RAM), and even photo editing isn’t significantly speedier. This is a good time to point out that the M2 MacBook Air is still a fine machine, and it’s an even better deal now thanks to a lower $999 starting price. As we’ve said, the best thing about the existence of the M3 Airs is that they’ve made the M2 models cheaper. You’ll surely find some good deals from stores clearing out older stock and refurbished units, as well as existing owners selling off their M2 machines.
Gaming and productivity work
I’ll give the M3 MacBook Airs this: they’re noticeably faster for gaming. I was able to run Lies of P in 1080p+ (1,920 by 1,200) with high graphics settings and see a smooth 60fps most of the time. It occasionally dipped into the low-50fps range, but that didn’t affect the game’s playability much. The director’s cut of Death Stranding was also smooth and easy to play at that resolution, so long as I didn’t crank up the graphics settings too much. It’s nice to have the option for some serious games on Macs for once. And if you want more variety, you can also stream high-end games over Xbox’s cloud streaming or NVIDIA’s GeForce Now.
In addition to being a bit faster than before, the 13-inch and 15-inch MacBook Airs are simply nice computers to use. Their 500-nit screens support HDR and are bright enough to use outdoors in sunlight. While they’re not as impressive as the ProMotion MiniLED displays on the MacBook Pros, they’ll get the job done for most users. Apple’s quad- and six-speaker arrays are also best-in-class, and the 1080p webcams on both computers are perfect for video conferencing (especially when paired with Apple’s camera tweaks for brightness and background blurring). And I can’t say enough good things about the MacBook Air’s responsive keyboard and smooth trackpad – I wish every laptop used them.
Photo by Devindra Hardawar/Engadget
Battery
Unfortunately, the short turn-around time for this review prevented me from running a complete battery test for these computers. At the moment, though, I can say that both machines only used up 40 percent of battery life while playing a 4K fullscreen video at full brightness for over 10 hours. Apple claims they’ll play an Apple TV video for up to 18 hours, as well as browse the web wirelessly for up to 15 hours. My testing shows they’ll definitely last far more than a typical workday. (I would often go three days without needing to charge the 13-inch M2 MacBook Air. Based on what I’ve seen so far, I expect similar performance from the M3 models.)
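Extrapolating linearly from that single data point shows why (battery drain is rarely perfectly linear, so treat this as a ballpark):

```python
# Ballpark projection from the observed drain: 40% of battery over 10 hours
# of fullscreen 4K video. Real drain curves are not perfectly linear.
hours_observed = 10
fraction_used = 0.40

print(hours_observed / fraction_used)  # 25.0 h projected, beyond Apple's 18 h claim
```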
Photo by Devindra Hardawar/Engadget
Wrap-up
There aren’t any major surprises with the 13-inch and 15-inch M3 MacBook Air, but after years of continual upgrades, that’s to be expected. They’re great computers with excellent performance, gorgeous screens and incredible battery life. And best of all, their introduction also pushes down the prices of the still-great M2 models, making them an even better deal.
AI-run labs have arrived — such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty
Scientists of all stripes are embracing artificial intelligence (AI) — from developing ‘self-driving’ laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots[1].
Artificial intelligence and illusions of understanding in scientific research
In a Perspective article[2] published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools’ limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.
Scientists planning to use AI “must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline”, write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.
The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.
ChatGPT is a black box: how AI research can break it open
To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.
In one ‘vision’, which they call AI as Oracle, researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called AI as Arbiter, systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, AI as Quant, AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, AI as Surrogate, AI tools simulate data that are too difficult or complex to obtain.
Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the illusion of explanatory depth[3], in which people relying on another person — or, in this case, an algorithm — for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.
How to stop AI deepfakes from sinking society — and science
Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test — the researchers call this the illusion of exploratory breadth. For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI — and discourage those on behaviours that cannot, such as anything that requires being embodied physically.
There’s also the illusion of objectivity, in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. “There’s a risk that we forget that there are certain questions we just can’t answer about human beings using AI tools,” says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.
Avoid the traps
If you’re a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don’t have, says Crockett.
Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for — and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.
All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone — including researchers of all kinds — must now listen.
Do you really like the aesthetic of bowling but have no interest in the game itself? In January, Xbox released the special edition Dream Vapor controller with swirls that look like they’ve been pulled right from a bowling ball. Now, the Dream Vapor controller — which is a great accessory for the Xbox Series X|S, Xbox One or Windows — is on sale for $58, down from $70. The 17 percent discount puts the model at the lowest price we’ve seen yet.
Xbox’s Dream Vapor controller is — dare we say — beautiful. It has pink and purple accents that swirl together to create a calm, aesthetically pleasing look. Even the buttons are in a light pink with purple accents. The wireless controller works like its counterparts, offering 40 hours of battery life, custom button mapping and a share button.
The Dream Vapor model isn’t the only Xbox controller available for a record-low price. If you’re looking for a basic new controller, the Robot White Xbox controller is down to $45 from $60 — a 25 percent discount. It’s a sleek but fun option with ABXY buttons in a range of bright colors.
Now the first data of their kind show a link between these microplastics and human health. A study of more than 200 people undergoing surgery found that nearly 60% had microplastics or even smaller nanoplastics in a main artery[1]. Those who did were 4.5 times more likely to experience a heart attack, a stroke or death in the approximately 34 months after the surgery than were those whose arteries were plastic-free.
“This is a landmark trial,” says Robert Brook, a physician-scientist at Wayne State University in Detroit, Michigan, who studies the environmental effects on cardiovascular health and was not involved with the study. “This will be the launching pad for further studies across the world to corroborate, extend and delve into the degree of the risk that micro- and nanoplastics pose.”
But Brook, other researchers and the authors themselves caution that this study, published in The New England Journal of Medicine on 6 March, does not show that the tiny pieces caused poor health. Other factors that the researchers did not study, such as socio-economic status, could be driving ill health rather than the plastics themselves, they say.
Plastic planet
Scientists have found microplastics just about everywhere they’ve looked: in oceans; in shellfish; in breast milk; in drinking water; wafting in the air; and falling with rain.
Such contaminants are not only ubiquitous but also long-lasting, often requiring centuries to break down. As a result, cells responsible for removing waste products can’t readily degrade them, so microplastics accumulate in organisms.
Microplastics are everywhere — but are they harmful?
In humans, they have been found in the blood and in organs such as the lungs and placenta. However, just because they accumulate doesn’t mean they cause harm. Scientists have been worried about the health effects of microplastics for around 20 years, but what those effects are has proved difficult to evaluate rigorously, says Philip Landrigan, a paediatrician and epidemiologist at Boston College in Chestnut Hill, Massachusetts.
Giuseppe Paolisso, an internal-medicine physician at the University of Campania Luigi Vanvitelli in Caserta, Italy, and his colleagues knew that microplastics are attracted to fat molecules, so they were curious about whether the particles would build up in fatty deposits called plaques that can form on the lining of blood vessels. The team tracked 257 people undergoing a surgical procedure that reduces stroke risk by removing plaque from an artery in the neck.
Blood record
The researchers put the excised plaques under an electron microscope. They saw jagged blobs — evidence of microplastics — intermingled with cells and other waste products in samples from 150 of the participants. Chemical analyses revealed that the bulk of the particles were composed of either polyethylene, which is the most used plastic in the world and is often found in food packaging, shopping bags and medical tubing, or polyvinyl chloride, known more commonly as PVC or vinyl.
Microplastic particles (arrows) infiltrate a living immune cell called a macrophage that was removed from a fatty deposit in a study participant’s blood vessel. Credit: R. Marfella et al./N Engl J Med
On average, participants who had more microplastics in their plaque samples also had higher levels of biomarkers for inflammation, analyses revealed. That hints at how the particles could contribute to ill health, Brook says. If they help to trigger inflammation, they might boost the risk that a plaque will rupture, spilling fatty deposits that could clog blood vessels.
Compared with participants who didn’t have microplastics in their plaques, participants who did were younger, more likely to be male, more likely to smoke, and more likely to have diabetes or cardiovascular disease. Because the study included only people who required surgery to reduce stroke risk, it is unknown whether the link holds true in a broader population.
Brook is curious about the 40% of participants who showed no evidence of microplastics in their plaques, especially given that it is nearly impossible to avoid plastics altogether. Study co-author Sanjay Rajagopalan, a cardiologist at Case Western Reserve University in Cleveland, Ohio, says it’s possible that these participants behave differently or have different biological pathways for processing the plastics, but more research is needed.
Researchers have fought for more input into the negotiations for a global treaty on plastics, noting that progress on the treaty has been too slow. The latest study is likely to light a fire under negotiators when they gather in Ottawa in April, says Landrigan, who co-authored a report[2] that recommended a global cap on plastic production.
While Rajagopalan awaits further data on microplastics, his findings have already had an impact on his daily life. “I’ve had a much more conscious, intentional look at my own relationship with plastics,” he says. “I hope this study brings some introspection into how we, as a society, use petroleum-derived products to reshape the biosphere.”
Most smartphone games are designed to be played with touch controls first and foremost. But if you want to stream games from an Xbox or PlayStation, or if you gravitate toward games with more complex control schemes like Call of Duty: Mobile or Diablo Immortal, a mobile gamepad like the Backbone One can make things more comfortable.
If this sounds appealing to you, Backbone is running a sale that brings the Lightning-based version of the One down to $70 at Amazon, Best Buy, Target and its own online store. While that’s not an all-time low, it’s still $30 off the controller’s usual going rate.
Photo by Mat Smith / Engadget
This is a 30 percent discount on the Lightning-based version of Backbone’s excellent mobile game controller.
$70 at Amazon
In general, discounts on the device have been uncommon. The offer applies to both the standard black model and the PlayStation-branded white model, which are functionally the same but use different icons. The discount technically started earlier this week, but Backbone says it’ll run through March 10. Unfortunately, the sale does not extend to the USB-C version of the device, so Android users or those who plan on upgrading to an iPhone 15 anytime soon should pass.
If you plan to play on an iPhone 14 or older for the next couple of years, though, this deal should be worthwhile. As my colleague Mat Smith noted in his review, the One fits snugly and works immediately with remote streaming apps and virtually every iOS game with controller support. It has all the requisite buttons to play modern games, including pressure-sensitive triggers and analog joysticks, along with a built-in headphone jack and a pass-through charging port. Its clicky face buttons are on the noisy side, and its d-pad is somewhat spongy. Still, its rounded grips keep it comfortable to hold over time, and it balances its weight better than an Xbox or PS5 pad hooked up to a mobile gaming clip. It also comes with a handy companion app, which you can use to quickly launch games and start party chats. If nothing else, it should be a more cost-effective alternative to cloud gaming handhelds like the PlayStation Portal.
The National Science Library of the Chinese Academy of Sciences in Beijing. Credit: Yang Qing/Imago via Alamy
China has updated its list of journals that are deemed to be untrustworthy, predatory or not serving the Chinese research community’s interests. Called the Early Warning Journal List, the latest edition, published last month, includes 24 journals from about a dozen publishers. For the first time, it flags journals that exhibit misconduct called citation manipulation, in which authors try to inflate their citation counts.
Yang Liying studies scholarly literature at the National Science Library, Chinese Academy of Sciences, in Beijing. She leads a team of about 20 researchers who produce the annual list, which was launched in 2020 and relies on insights from the global research community and analysis of bibliometric data.
The list is becoming increasingly influential. It is referenced in notices sent out by Chinese ministries to address academic misconduct, and is widely shared on institutional websites across the country. Journals included in the list typically see submissions from Chinese authors drop. This is the first year the team has revised its method for developing the list; Yang speaks to Nature about the process, and what has changed.
How do you go about creating the list every year?
We start by collecting feedback from Chinese researchers and administrators, and we follow global discussions on new forms of misconduct to determine the problems to focus on. In January, we analyse raw data from the science-citation database Web of Science, provided by the publishing-analytics firm Clarivate, based in London, and prepare a preliminary list of journals. We share this with relevant publishers, and explain why their journals could end up on the list.
Sometimes publishers give us feedback and make a case against including their journal. If their response is reasonable, we will remove it. We appreciate suggestions to improve our work. We never see the journal list as a perfect one. This year, discussions with publishers cut the list from around 50 journals down to 24.
Yang Liying studies scholarly literature at the National Science Library and manages a team of 20 to put together the Early Warning Journal List. Credit: Yang Liying
What changes did you make this year?
In previous years, journals were categorized as being high, medium or low risk. This year, we didn’t report risk levels because we removed the low-risk category, and we also realized that Chinese researchers ignore the risk categories and simply avoid journals on the list altogether. Instead, we provided an explanation of why the journal is on the list.
In previous years, we included journals with publication numbers that increased very rapidly. For example, if a journal published 1,000 articles one year and then 5,000 the next year, our initial logic was that it would be hard for these journals to maintain their quality-control procedures. We have removed this criterion this year. The shift towards open access has meant that it is possible for journals to receive a large number of manuscripts, and therefore rapidly increase their article numbers. We don’t want to disturb this natural process decided by the market.
You also introduced journals with abnormal patterns of citation. Why?
We noticed that there has been a lot of discussion on the subject among researchers around the world. It’s hard for us to say whether the problem comes from the journals or from the authors themselves. Sometimes groups of authors agree to this citation manipulation mutually, or they use paper mills, which produce fake research papers. We identify these journals by looking for trends in citation data provided by Clarivate — for example, journals in which manuscript references are highly skewed to one journal issue or articles authored by a few researchers. Next year, we plan to investigate new forms of citation manipulation.
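One crude way to operationalize the skew Yang describes is to measure how concentrated a manuscript's references are. The sketch below flags a paper when too many of its citations point at a single journal issue; the 50% threshold and the reference format are arbitrary assumptions for illustration.

```python
# Toy detector for skewed citation patterns: flag a manuscript when one
# journal issue receives an outsized share of its references. Threshold
# and reference format are illustrative assumptions.
from collections import Counter

def references_skewed(references, threshold=0.5):
    """references: list of 'journal:issue' strings from one manuscript."""
    counts = Counter(references)
    top_share = counts.most_common(1)[0][1] / len(references)
    return top_share >= threshold

refs = ["J. Foo:12(3)"] * 18 + ["J. Bar:9(1)", "J. Baz:4(2)"]
print(references_skewed(refs))  # True: 90% of citations target one issue
```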
Our work seems to have an impact on publishers. Many publishers have thanked us for alerting them to the issues in their journals, and some have initiated their own investigations. One example from this year is the open-access publisher MDPI, based in Basel, Switzerland, which we informed that four of its journals would be included in our list because of citation manipulation. Perhaps it is unrelated, but on 13 February, MDPI sent out a notice that it was looking into potential reviewer misconduct involving unethical citation practices in 23 of its journals.
You also flag journals that publish a high proportion of papers from Chinese researchers. Why is this a concern?
This is not a criterion we use on its own. These journals publish — sometimes almost exclusively — articles by Chinese researchers, charge unreasonably high article processing fees and have a low citation impact. From a Chinese perspective, this is a concern because we are a developing country and want to make good use of our research funding to publish our work in truly international journals to contribute to global science. If scientists publish in journals where almost all the manuscripts come from Chinese researchers, our administrators will suggest that instead the work should be submitted to a local journal. That way, Chinese researchers can read it and learn from it quickly and don’t need to pay so much to publish it. This is a challenge that the Chinese research community has been confronting in recent years.
How do you determine whether a journal has a paper-mill problem?
My team collects information posted on social media as well as websites such as PubPeer, where users discuss published articles, and the research-integrity blog For Better Science. We currently don’t do the image or text checks ourselves, but we might start to do so later.
My team has also created an online database of questionable articles called Amend, which researchers can access. We collect information on article retractions, notices of concern, corrections and articles that have been flagged on social media.
[Chart ‘Marked down’: manuscript submissions from Chinese researchers typically drop after a journal appears on the list. Source: Early Warning Journal List]
What impact has the list had on research in China?
This list has benefited the Chinese research community. Most Chinese research institutes and universities reference our list, but they can also develop their own versions. Every year, we receive criticisms from some researchers for including journals that they publish in. But we also receive a lot of support from those who agree that the journals included on the list are of low quality, which hurts the Chinese research ecosystem.
There have been a lot of retractions from China in journals on our list. And once a journal makes it on to the list, submissions from Chinese researchers typically drop (see ‘Marked down’). This explains why many journals on our list are excluded the following year — this is not a cumulative list.
This interview has been edited for length and clarity.