Nvidia CEO Jensen Huang has clarified comments he made about the supposed “death of coding”.
Huang had been criticized in the past for saying on several occasions that, with AI platforms soon set to do much of the heavy lifting when it comes to coding, young people today should not necessarily consider learning it a vital skill.
Speaking at Nvidia's GTC 2024 event in San Jose, Huang was asked at a press Q&A whether he still believed this was the case – and it seems not much has changed.
Death of coding?
“I think that people ought to learn all kinds of skills,” Huang said, comparing learning to code to skills such as juggling, playing piano or learning calculus.
However, he did add that, “programming is not going to be essential for you to be a successful person…but if somebody wants to learn to do so (program), please do – because we’re hiring programmers.”
In the past, Huang had said that time otherwise spent learning to code should instead be invested in expertise in industries such as farming, biology, manufacturing and education, and that upskilling could be a key way forward, helping provide the knowledge of how and when to use AI programming.
Huang did also add that generative AI would require a number of new skills in order to close the technology divide.
“You don’t have to be a C++ programmer to be successful,” he said. “You just have to be a prompt engineer. And who can’t be a prompt engineer? When my wife talks to me, she’s prompt engineering me.”
“We all need to learn how to prompt AIs, but that’s no different than learning how to prompt teammates.”
These skills could be vital for younger people entering the workforce at an auspicious time, Huang went on to add.
“It (AI) is a new industry – that’s why we say there’s a new industrial revolution,” he declared. “In the future, almost all of our computing will be generated.”
In 2022, Pratyusha Ria Kalluri, a graduate student in artificial intelligence (AI) at Stanford University in California, found something alarming in image-generating AI programs. When she prompted a popular tool for ‘a photo of an American man and his house’, it generated an image of a pale-skinned person in front of a large, colonial-style home. When she asked for ‘a photo of an African man and his fancy house’, it produced an image of a dark-skinned person in front of a simple mud house — despite the word ‘fancy’.
After some digging, Kalluri and her colleagues found that images generated by the popular tools Stable Diffusion, released by the firm Stability AI, and DALL·E, from OpenAI, overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones. The tools they studied even amplified some biases. For example, in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of colour and all flight attendants as women, and in proportions that are much greater than the demographic reality (see ‘Amplified stereotypes’)1. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin colour, occupations, nationalities and more.
Perhaps this is unsurprising, given that society is full of such stereotypes. Studies have shown that images used by media outlets2, global health organizations3 and Internet databases such as Wikipedia4 often have biased representations of gender and race. AI models are being trained on online pictures that are not only biased but that also sometimes contain illegal or problematic imagery, such as photographs of child abuse or non-consensual nudity. They shape what the AI creates: in some cases, the images created by image generators are even less diverse than the results of a Google image search, says Kalluri. “I think lots of people should find that very striking and concerning.”
This problem matters, researchers say, because the increasing use of AI to generate images will further exacerbate stereotypes. Although some users are generating AI images for fun, others are using them to populate websites or medical pamphlets. Critics say that this issue should be tackled now, before AI becomes entrenched. Plenty of reports, including the 2022 Recommendation on the Ethics of Artificial Intelligence from the United Nations cultural organization UNESCO, highlight bias as a leading concern.
Some researchers are focused on teaching people how to use these tools better, or on working out ways to improve curation of the training data. But the field is rife with difficulty, including uncertainty about what the ‘right’ outcome should be. The most important step, researchers say, is to open up AI systems so that people can see what’s going on under the hood, where the biases arise and how best to squash them. “We need to push for open sourcing. If a lot of the data sets are not open source, we don’t even know what problems exist,” says Abeba Birhane, a cognitive scientist at the Mozilla Foundation in Dublin.
Make me a picture
Image generators first appeared in 2015, when researchers built alignDRAW, an AI model that could generate blurry images based on text input5. It was trained on a data set containing around 83,000 images with captions. Today, a swathe of image generators of varying abilities are trained on data sets containing billions of images. Most tools are proprietary, and the details of which images are fed into these systems are often kept under wraps, along with exactly how they work.
This image, generated from a prompt for “an African man and his fancy house”, shows some of the typical associations between ‘African’ and ‘poverty’ in many generated images. Credit: P. Kalluri et al. generated using Stable Diffusion XL
In general, these generators learn to connect attributes such as colour, shape or style to various descriptors. When a user enters a prompt, the generator builds new visual depictions on the basis of attributes that are close to those words. The results can be both surprisingly realistic and, often, strangely flawed (hands sometimes have six fingers, for example).
The captions on these training images — written by humans or automatically generated, either when they are first uploaded to the Internet or when data sets are put together — are crucial to this process. But this information is often incomplete, selective and thus biased itself. A yellow banana, for example, would probably be labelled simply as ‘a banana’, but a description for a pink banana would be likely to include the colour. “The same thing happens with skin colour. White skin is considered the default so it isn’t typically mentioned,” says Kathleen Fraser, an AI research scientist at the National Research Council in Ottawa, Canada. “So the AI models learn, incorrectly in this case, that when we use the phrase ‘skin colour’ in our prompts, we want dark skin colours,” says Fraser.
The difficulty with these AI systems is that they can’t just leave out ambiguous or problematic details in their generated images. “If you ask for a doctor, they can’t leave out the skin tone,” says Kalluri. And if a user asks for a picture of a kind person, the AI system has to visualize that somehow. “How they fill in the blanks leaves a lot of room for bias to creep in,” she says. This is a problem that is unique to image generation — by contrast, an AI text generator could create a language-based description of a doctor without ever mentioning gender or race, for instance; and for a language translator, the input text would be sufficient.
Do it yourself
One commonly proposed approach to generating diverse images is to write better prompts. For instance, a 2022 study found that adding the phrase “if all individuals can be [X], irrespective of gender” to a prompt helps to reduce gender bias in the images produced6.
But this doesn’t always work as intended. A 2023 study by Fraser and her colleagues found that such intervention sometimes exacerbated biases7. Adding the phrase “if all individuals can be felons irrespective of skin colour”, for example, shifted the results from mostly dark-skinned people to all dark-skinned people. Even explicit counter-prompts can have unintended effects: adding the word ‘white’ to a prompt for ‘a poor person’, for example, sometimes resulted in images in which commonly associated features of whiteness, such as blue eyes, were added to dark-skinned faces.
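The prompt-intervention approach described in these studies amounts to appending a fixed clause to the user's request. A minimal sketch of that idea, assuming nothing about any real image-generation API (the function name and prompt template here are illustrative only):

```python
def debiased_prompt(role: str, attribute: str = "gender") -> str:
    """Build a text-to-image prompt with an 'irrespective of' clause
    of the kind reported to reduce, but sometimes worsen, bias."""
    # The clause mirrors the phrasing quoted in the studies; the
    # surrounding template ("a photo of a ...") is a hypothetical choice.
    return (f"a photo of a {role}, "
            f"if all individuals can be a {role} irrespective of {attribute}")

# e.g. the gender case from the 2022 study:
prompt = debiased_prompt("doctor")
# and the skin-colour case that backfired in the 2023 follow-up:
risky = debiased_prompt("felon", attribute="skin colour")
```

As the follow-up study shows, this kind of string-level fix steers the model's associations without controlling them, which is why it can overshoot or produce incoherent blends of features.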
In a Lancet study of global health images, the prompt “Black African doctor is helping poor and sick white children, photojournalism” produced this image, which reproduced the ‘white saviour’ trope the researchers were explicitly trying to counteract. Credit: A. Alenichev et al. generated using Midjourney
Another common fix is for users to direct results by feeding in a handful of images that are more similar to what they’re looking for. The generative AI program Midjourney, for instance, allows users to add image URLs in the prompt. “But it really feels like every time institutions do this they are really playing whack-a-mole,” says Kalluri. “They are responding to one very specific kind of image that people want to have produced and not really confronting the underlying problem.”
These solutions also unfairly put the onus on the users, says Kalluri, especially those who are under-represented in the data sets. Furthermore, plenty of users might not be thinking about bias, and are unlikely to pay to run multiple queries to get more-diverse imagery. “If you don’t see any diversity in the generated images, there’s no financial incentive to run it again,” says Fraser.
Some companies say they add something to their algorithms to help counteract bias without user intervention: OpenAI, for example, says that DALL·E2 uses a “new technique” to create more diversity from prompts that do not specify race or gender. But it’s unclear how such systems work and they, too, could have unintended impacts. In early February, Google released an image generator that had been tuned to avoid some typical image-generator pitfalls. A media frenzy ensued when user prompts requesting a picture of a ‘1943 German soldier’ created images of Black and Asian Nazis — a diverse but historically inaccurate result. Google acknowledged the mistake and temporarily stopped its generator creating images of people.
Data clean-up
Alongside such efforts lie attempts to improve curation of training data sets, which is time-consuming and expensive for those containing billions of images. That means companies resort to automated filtering mechanisms to remove unwanted data.
However, automated filtering based on keywords doesn’t catch everything. Researchers including Birhane have found, for example, that benign keywords such as ‘daughter’ and ‘nun’ have been used to tag sexually explicit images in some cases, and that images of schoolgirls are sometimes tagged with terms searched for by sexual predators8. And filtering, too, can have unintended effects. For example, automated attempts to clean large, text-based data sets have removed a disproportionate amount of content created by and for individuals from minority groups9. And OpenAI discovered that its broad filters for sexual and violent imagery in DALL·E2 had the unintended effect of creating a bias against the generation of images of women, because women were disproportionately represented in those images.
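The failure mode described here is inherent to blocklist-style filtering: a filter that matches only a fixed set of keywords passes anything labelled with benign words, whatever the image actually shows. A toy sketch (the blocklist and captions are made up for illustration):

```python
# A naive caption-keyword filter of the kind used to clean large
# image data sets. It is leaky by construction: it sees only the
# caption text, never the image, and only exact blocklist words.
BLOCKLIST = {"explicit", "nsfw", "violence"}

def passes_filter(caption: str) -> bool:
    """Return True if no blocklisted word appears in the caption."""
    return not (set(caption.lower().split()) & BLOCKLIST)

captions = [
    "explicit content",          # caught by the blocklist
    "a daughter in the garden",  # benign wording passes, regardless of the image
]
kept = [c for c in captions if passes_filter(c)]
```

A harmful image tagged with an innocuous caption like the second example sails through, which is exactly the gap Birhane and colleagues documented.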
The best curation “requires human involvement”, says Birhane. But that’s slow and expensive, and looking at many such images takes a deep emotional toll, as she well knows. “Sometimes it just gets too much.”
Independent evaluations of the curation process are impeded by the fact that these data sets are often proprietary. To help overcome this problem, LAION, a non-profit organization in Hamburg, Germany, has created publicly available machine-learning models and data sets that link to images and their captions, in an attempt to replicate what goes on behind the closed doors of AI companies. “What they are doing by putting together the LAION data sets is giving us a glimpse into what data sets inside big corporations and companies like OpenAI look like,” says Birhane. Although intended for research use, these data sets have been used to train models such as Stable Diffusion.
Researchers have learnt from interrogating LAION data that bigger isn’t always better. AI researchers often assume that the bigger the training data set, the more likely that biases will disappear, says Birhane. “People often claim that scale cancels out noise,” she says. “In fact, the good and the bad don’t balance out.” In a 2023 study, Birhane and her team compared the data set LAION-400M, which has 400 million image links, with LAION-2B-en, which has 2 billion, and found that hate content in the captions increased by around 12% in the larger data set10, probably because more low-quality data had slipped through.
An investigation by another group found that the LAION-5B data set contained child sexual abuse material. Following this, LAION took down the data sets. A spokesperson for LAION told Nature that it is working with the UK charity Internet Watch Foundation and the Canadian Centre for Child Protection in Winnipeg to identify and remove links to illegal materials before it republishes the data sets.
Open or shut
If LAION is bearing the brunt of some bad press, that’s perhaps because it’s one of the few open data sources. “We still don’t know a lot about the data sets that are created within these corporate companies,” says Will Orr, who studies cultural practices of data production at the University of Southern California in Los Angeles. “They say that it’s to do with this being proprietary knowledge, but it’s also a way to distance themselves from accountability.”
In response to Nature’s questions about which measures are in place to remove harmful or biased content from DALL·E’s training data set, OpenAI pointed to publicly available reports that outline its work to reduce gender and racial bias, without providing exact details on how that’s accomplished. Stability AI and Midjourney did not respond to Nature’s e-mails.
Orr interviewed some data set creators from technology companies, universities and non-profit organizations, including LAION, to understand their motivations and the constraints. “Some of these creators had feelings that they were not able to present all the limitations of the data sets,” he says, because that might be perceived as critical weaknesses that undermine the value of their work.
Specialists feel that the field still lacks standardized practices for annotating their work, which would help to make it more open to scrutiny and investigation. “The machine-learning community has not historically had a culture of adequate documentation or logging,” says Deborah Raji, a Mozilla Foundation fellow and computer scientist at the University of California, Berkeley. In 2018, AI ethics researcher Timnit Gebru — a strong proponent of responsible AI and co-founder of the community group Black in AI — and her team released a datasheet to standardize the documentation process for machine-learning data sets11. The datasheet has more than 50 questions to guide documentation about the content, collection process, filtering, intended uses and more.
The datasheet “was a really critical intervention”, says Raji. Although many academics are increasingly adopting such documentation practices, there’s no incentive for companies to be open about their data sets. Only regulations can mandate this, says Birhane.
One example is the European Union’s AI Act, which was endorsed by the European Parliament on 13 March. Once it becomes law, it will require that developers of high-risk AI systems provide technical documentation, including datasheets describing the training data and techniques, as well as details about the expected output quality and potential discriminatory impacts, among other information. But which models will come under the high-risk classification remains unclear. Once in force, the act will be the first comprehensive regulation for AI technology and will shape how other countries think about AI laws.
Specialists such as Birhane, Fraser and others think that explicit and well-informed regulations will push companies to be more cognizant of how they build and release AI tools. “A lot of the policy focus for image-generation work has been oriented around minimizing misinformation, misrepresentation and fraud through the use of these images, and there has been very little, if any, focus on bias, functionality or performance,” says Raji.
Even with a focus on bias, however, there’s still the question of what the ideal output of AI should be, researchers say — a social question with no simple answer. “There is not necessarily agreement on what the so-called right answer should look like,” says Fraser. Do we want our AI systems to reflect reality, even if the reality is unfair? Or should it represent characteristics such as gender and race in an even-handed, 50:50 way? “Someone has to decide what that distribution should be,” she says.
Andrew Day, a molecular microbiologist at Tufts University in Medford, Massachusetts, is four years sober. His journey to this point inspires his work, which he hopes might help others who are struggling with alcohol.
There are many risk factors associated with alcohol-use disorder (AUD), including mental-health conditions and genetics. But Day is eyeing a more unusual contributor: the gut.
Over the past decade, research has begun to highlight a link between the gastrointestinal microbiome — the microorganisms that live inside our digestive tract — and addiction. Researchers including Day suggest that an imbalance in the intestinal microbiota, known as dysbiosis, might cause the gut to send signals to the brain that promote addiction behaviours. If correct, the gut could become a treatment target for people with AUD. “I could find something that might make it easier for people who might not be as fortunate to maintain sobriety,” says Day, who is studying the theory that high levels of the fungus Candida albicans in the gut contributes to increased alcohol consumption in mice as part of his PhD.
This is a sharp departure from conventional medical approaches to treating addiction. Most drugs for AUD and substance-use disorder (SUD) focus on brain chemistry. Many of them are not very effective. Medications for AUD approved by the US Food and Drug Administration (FDA) include naltrexone and acamprosate. In addition, the European Medicines Agency (EMA) has approved nalmefene. Acamprosate modulates brain receptors such as those that bind γ-aminobutyric acid (GABA), an inhibitory neurotransmitter thought to have a role in withdrawal, craving and impulsive behaviour. Nalmefene and naltrexone modulate opioid receptors, nalmefene reduces alcohol cravings, and naltrexone blocks euphoric sensations associated with alcohol.
According to the US Substance Abuse and Mental Health Services Administration, only 42% of people who receive treatment for any kind of SUD complete that treatment1. Between 40% and 60% of people with an SUD will relapse, and it can take years — sometimes decades — of see-sawing between abstinence and relapse before someone achieves sustained remission. Clearly, there is room for improvement. “We’ve missed the target for 50 years,” says Benjamin Boutrel, a neurobiologist at Lausanne University Hospital in Switzerland. “Mostly because it’s not only a matter of the brain — it’s possibly a matter of the guts, too.”
The gut–brain axis
It is now well known that there is complex communication between the gut and the brain, through the vagus nerve as well as through the endocrine and immune systems. This gut–brain signalling has been suggested to influence addiction-related behaviours in two main ways.
Andrew Day hopes his research will help others who have alcohol-use disorder. Credit: Dr. Carol Kumamoto
The first involves a condition known as leaky gut. Stress, poor diet, food allergies, chemotherapy and other medication, conditions such as inflammatory bowel disease and — perhaps crucially — overuse of alcohol can damage the layer of epithelial cells that line the intestines. This can make the intestinal wall permeable to food particles and bacteria, which can then sneak into the circulatory system.
When this happens, immune cells secrete inflammatory mediators such as cytokines. These proteins can then reach the brain, either through the vagus nerve or by crossing weak areas in the blood–brain barrier, a layer of cells meant to protect the brain from damage.
The subsequent inflammation can affect the brain in several ways that could promote addiction. Cytokines deplete tryptophan, which can lead to reduced production of the mood-regulating hormone serotonin. The brain’s amygdala might sense a threat in the body and increase its activity in response to inflammation. The ventral striatum — the area of the brain related to reward anticipation — might also be ignited. The anterior cingulate cortex — the part of the brain involved in inhibitory control and compulsive behaviour — can also activate during inflammation.
Second, the molecules that gut microbes produce could influence addiction. Some of these are important for brain functioning. The gut bacteria Lactobacillus, for example, can produce GABA; Enterococcus can produce serotonin; and Bacillus can make dopamine. Short-chain fatty acids (SCFAs) released when dietary fibre is fermented by bacteria in the gut also have neuroactive properties.
Gut dysbiosis, and its subsequent impact on GABA, serotonin, dopamine and tryptophan, could, therefore, make a person more susceptible to addiction and mean that they experience more severe withdrawal symptoms than would someone with a healthy gut microbiome.
“The gut microbiome is really important for some organs, including the brain,” says Drew Kiraly, a psychiatrist and physician at Wake Forest University in Winston-Salem, North Carolina. Kiraly has observed associations between dysbiosis and addictive behaviour to stimulants and opioids in rats. He has used antibiotics to deplete rats’ beneficial gut microbes, resulting in “aberrant responses to drugs”. The animals had increased intake of cocaine and fentanyl, he says. “And after withdrawal, they relapse and have higher fentanyl-seeking behaviour.”
Addictive personality
Even before first contact with alcohol or drugs, pre-existing dysbiosis could make someone more vulnerable to addiction, Boutrel says. The imbalance could give rise to traits such as impulsivity, boredom, susceptibility to stress or anxiety, and sensation seeking. “Those who get thrilled with poker playing, with pathological sex, they all need something,” Boutrel says. “There is a vulnerability there that, once that first contact is made, will trigger repetition — and finally, addiction.”
Sophie Leclercq is one of few researchers able to study theories about the gut microbiome in people with alcohol-use disorder. Credit: Sophie Leclercq
In 2018, Boutrel and his colleagues put a group of 59 rats through a number of tests designed to assess their vulnerability to AUD2. First, the rodents were trained to self-administer alcohol by pressing a lever. The researchers then tried to gauge the rats’ self-control by introducing a delay to the reward delivery. Some rats pressed the lever once, realized that they had to wait, and went about their business. But some would continue pressing over and over, attempting to make the alcohol arrive more quickly — an indication of addiction.
The final test, which Boutrel thinks is most telling, introduced a deterrent — an uncomfortable foot shock every time the animals took the alcohol. For most of the rats, this discouragement was sufficient and they stopped pressing the lever. However, a sizable minority “just didn’t care”, Boutrel says. “They could not stop pressing the lever and accessing the reward, even when they got a punishment.” In total, about 30% of the rats demonstrated vulnerability to AUD.
Having identified a group of vulnerable rats, Boutrel and his colleagues removed alcohol from the rats’ environment for three months, and then compared the brains and gut microbiomes of the vulnerable rats with those of rats that had proven more resistant to AUD. The team found that the vulnerable rats had more efficient dopamine 1 receptors (which trigger increased reward-seeking and motivation) and less efficient dopamine 2 receptors (which cause impulsivity, and an increased need for immediate rewards and drug administration). They also found differences in the bacterial content of the vulnerable rats’ guts — most notably, changes in Lachnospiraceae, Syntrophococcus and other bacteria associated with reductions in dopamine 2 receptors. This, the researchers suggest, is an indication that gut microbiota could affect brain circuits associated with addiction.
Alcohol and other drugs
Sophie Leclercq, a biomedical scientist at the Catholic University of Louvain in Brussels, was an early advocate of the theory about an AUD gut–brain origin, and one of the first to test it in people3. Her aim was to find out whether intestinal permeability was related to character traits that might make people more susceptible to alcohol dependence.
Lactobacillus gut bacteria can produce the inhibitory neurotransmitter GABA. Credit: BSIP/UIG Via Getty Images
Leclercq and her colleagues tested the intestinal permeability of 60 people with AUD two days after they began withdrawal. The researchers found that 26 (43%) had high intestinal permeability. At the beginning of the study, everyone with AUD had higher scores of depression, anxiety and craving than did people in the control group. At the end of three weeks of abstinence, the scores of people with low intestinal permeability returned to levels equal to those of the control group. People with high intestinal permeability, however, still scored highly in tests of depression, anxiety and craving, which are directly related to the urge to drink and have a major role in whether people can abstain after detoxification.
“We wanted to see if there was some connection between the gut microbiota and the psychology of AUD, and, indeed, we found that there is a very strong association between dysbiosis, the alteration of the gut microbiota composition, and symptoms like depression, anxiety or grief,” Leclercq says.
Although much of this research is related to people with AUD, Kiraly says that they’ve seen similar results in people who misuse opioids, and cocaine and other stimulants. “Depletion [of microbiota] seems to dysregulate these networks that underlie behavioural changes,” he says.
In 2023, Kiraly and his colleagues looked at whether rats’ microbiomes affected the animal’s drug-seeking behaviours4. In one experiment, rats were given either clean water or water containing the antibiotics neomycin, vancomycin, bacitracin and pimaricin, all of which would deplete their gut microbiota. They were then let into a chamber in which they could push a lever that lit up and provided 0.8 milligrams of cocaine. Later, researchers altered how the lever behaved — now it would light up when pushed, but would have to be pushed more times for the rats to receive cocaine. Researchers found that the rats with depleted gut microbiota were much more likely to press the lever repeatedly to receive cocaine than were the rats given only water.
In a second experiment, both groups of rats were able to self-administer cocaine for two weeks, then detoxed for 21 days. When the rats returned to the cages in which cocaine was available, those receiving antibiotics headed to the lever that originally dosed cocaine twice as quickly as the other rats did. These rats also pressed the lever much more frequently than the control rats did.
“We wanted to study a model of relapse and we saw that microbiome-depleted animals work harder for a drug-related cue than the others did,” Kiraly said. “Lots of people use drugs and not all get to the stage of problematic use. It could be that your microbiome predisposes you.”
Treatment questions
There is still a lot of research that needs to be done before any microbiome-targeted treatment could be offered to people with AUD or another SUD. Researchers don’t yet know, for example, which microbiota are most important, and which gut–brain pathways they need to target. “People have asked me, ‘Can someone just eat yogurt and cure their addiction?’” Kiraly says. “It’s going to be much, much more complicated than that.”
Kiraly would like to see whether probiotics or other treatments could have potential for people with early problematic use but who have not yet progressed to AUD. For instance, some rats in Kiraly’s study were administered SCFAs alongside their antibiotics. Compared with rats that received only antibiotics, those also given SCFAs seemed to retain more Firmicutes and less Proteobacteria (many of which are pathogenic). Strikingly, when the post-detox rats were given the chance to consume cocaine again, those who had received SCFAs behaved like rats with normal gut flora.
Leclercq thinks that 30–40% of cases of AUD might have a gut-related component that could be targeted for treatment. A key challenge is determining exactly which components to target — it is as yet unclear what constitutes a ‘good’ microbiome. Day’s analysis suggests that bacteria such as Lactobacillus were abundant in people with AUD, whereas Akkermansia and some others were low.
There is also uncertainty about which part of the chain of communication between the gut and brain would be the easiest and most effective to target. The nervous system, the bloodstream and the tissue surrounding the gut are all candidates.
It is also tricky to find people with AUD who are willing to not only abstain from drinking, but also take part in research, including providing samples of their gut microbiome. Leclercq is one of few researchers able to work with people, instead of rats, because she is affiliated with a hospital with a detoxification clinic. But even she can find it difficult to get enough volunteers for studies. In work assessing the effects of a prebiotic on people with AUD, the number of people with dysbiosis was around half that of those who had healthy guts, making comparisons between the two difficult. Leclercq’s analysis of this aspect of the study is yet to be published.
Despite these issues, Leclercq is moving forward with her research, and is now looking at nutrition as a way to improve the gut microbiome. She is starting a study on polyunsaturated fatty acids — such as those abundant in rapeseed and maize (corn) oils, walnuts, tofu and fatty fish, including salmon and mackerel — and hopes to have results in about two years. She’s also working to correlate which metabolites from food are related to depression, anxiety and craving, and trying to find funding for a study to test these particular nutritional compounds in people.
“Pharmaceutical companies have tried to target GABA, dopamine and serotonin, and these treatments aren’t very efficient because the relapse rate is very high in this disease,” she says. For people with AUD whose guts are contributing to their condition, nutritional interventions, probiotics and prebiotics could eventually improve the odds of success.
CAD files for what is purported to be the iPhone 16 Pro have recently surfaced online, giving people an idea of what Apple’s upcoming flagship may look like.
According to tech news site 91mobiles, the smartphone will look similar to the iPhone 15 Pro with a few notable differences. First off, the 16 Pro is potentially slated to be slightly larger than the current model, measuring 149.6 x 71.4 x 8.4 mm. The website’s industry sources go on to say it’ll have a 6.1-inch display. This conflicts with older leaks, which claim the mobile device will have a 6.3-inch screen. 91mobiles, however, leans toward the larger display because of the newly listed dimensions and the fact that the renders show thinner bezels around the glass.
(Image credit: 91mobiles/Apple)
The biggest revision found in the files is the inclusion of the rumored Capture Button, which will be located below the power button on the right side. The Capture Button, if you’re not familiar with it, is supposed to help users take better photographs by making the process more comfortable: you won’t have to tap the screen to take a photo.
The Capture Button’s full capabilities have been a mystery although a report from January offers some insight. Lightly tapping the Capture Button would cause the camera to focus while a complete press takes a photo – much like the shutter button on a traditional camera.
Keeping the best
On the back, the rear camera array apparently retains the same three-lens design as seen on the iPhone 15 Pro. Normally, this wouldn’t be important news, but another rumor from February claimed Apple was going to ditch the round camera platform, replacing it with a triangular one. People thought this would be the new design moving forward. However, it appears Apple may be sticking with the tried and true look.
91mobiles goes on to state that the “iPhone 16 Pro is expected to gain a 5X tetraprism telephoto camera”. The iPhone 15 Pro Max has the same type of lens, and images taken by it are stunning. It’s unknown if the 16 Pro will have optical image stabilization; the tech isn’t mentioned in the leak. Presumably it will, since zoomed-in shots benefit greatly from robust stabilization. Other potential features for the iPhone 16 Pro include a 48MP ultra-wide camera and a 3,355mAh battery.
Apple holds a major event every September where it announces all of the new iPhone models. Not only do we expect to see the iPhone 16 Pro revealed six months from now, but also the fourth-generation iPhone SE and the Apple Watch X.
So it’s going to be a while until we get official info. While you wait, check out TechRadar’s roundup of the best iPhones for 2024.