AI-fuelled election campaigns are here

Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

A woman plays Go with an AI-powered robot developed by the firm SenseTime, based in Hong Kong, at the 2024 Mobile World Congress. Credit: Joan Cros/NurPhoto via Getty

AI systems can now nearly match (and sometimes exceed) human performance in tasks such as reading comprehension, image classification and mathematics. “The pace of gain has been startlingly rapid,” says social scientist Nestor Maslej, editor-in-chief of the annual AI Index. The report calls for new benchmarks to assess algorithms’ capabilities and highlights the need for a consensus on what ethical AI models would look like. The report also finds that much of the cutting-edge work is being done in industry: companies produced 51 notable AI systems in 2023, compared with just 15 from academic research.

Nature | 6 min read

Reference: 2024 AI Index report

Speedy advances: Line chart showing the performance of AI systems on certain benchmark tests compared to humans since 2012.

Source: Artificial Intelligence Index Report 2024.

An experimental AI system called the Dust Watcher can predict the timing and severity of an incoming dust storm up to 12 hours in advance. Destructive dust storms — like the ones that battered Beijing over the weekend — sweep across many Asian countries every year. A separate predictor, the Dust Assimilation and Prediction System, dynamically integrates observational data with its model’s calculations to generate a 48-hour forecast. “It almost acts like an autopilot for the model,” says atmospheric scientist Jin Jianbing.

Nature | 6 min read
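The story describes the approach only at a high level, but the core idea of data assimilation — nudging a model forecast towards fresh observations in proportion to how uncertain each is — can be sketched in a few lines. Everything below (the scalar Kalman-style update, the numbers, the toy model step) is illustrative and is not the system’s actual code.

```python
# Minimal sketch of the data-assimilation idea (not the authors' code):
# blend a model forecast with an observation, weighting each by its
# uncertainty, then carry the corrected state into the next forecast step.

def assimilate(forecast, forecast_var, obs, obs_var):
    """Scalar Kalman-style update: returns (analysis, analysis_var)."""
    gain = forecast_var / (forecast_var + obs_var)  # trust obs more when the model is uncertain
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Toy forecast loop: dust concentration drifts upward in the model,
# and each hour a station reading nudges the state back towards reality.
state, var = 50.0, 25.0            # initial dust load (arbitrary units) and its variance
observations = [55.0, 70.0, 90.0]  # hypothetical hourly station readings
for obs in observations:
    state, var = state * 1.1, var + 10.0  # crude model step: growth plus added uncertainty
    state, var = assimilate(state, var, obs, obs_var=9.0)
    print(f"analysis: {state:.1f} (var {var:.1f})")
```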

A study identified buzzwords that could be considered typical of AI-generated text in up to 17% of peer-review reports for papers submitted to four conferences. The buzzwords include adjectives such as ‘commendable’ and ‘versatile’ that seem to be disproportionately used by chatbots. It’s unclear whether researchers used the tools to construct their reviews from scratch or just to edit and improve written drafts. “It seems like when people have a lack of time, they tend to use ChatGPT,” says study co-author and computer scientist Weixin Liang.

Nature | 5 min read

Reference: arXiv preprint (not peer reviewed)
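The preprint’s full analysis is statistical, but the basic signal — marker adjectives appearing far more often once chatbots became available — can be approximated with simple frequency counts. This toy sketch shows that general idea only, not the authors’ method; the marker list and example reviews are made up.

```python
# Rough sketch of the buzzword-frequency idea (not the preprint's actual
# method): compare how often marker adjectives such as 'commendable'
# appear in reviews written before and after chatbots became available.

from collections import Counter
import re

MARKERS = {"commendable", "versatile", "innovative", "meticulous"}  # illustrative list

def marker_rate(reviews):
    """Occurrences of marker words per 1,000 tokens across a corpus."""
    tokens = [t for r in reviews for t in re.findall(r"[a-z]+", r.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in MARKERS)
    return 1000.0 * hits / max(len(tokens), 1)

pre_llm = ["The method is sound but the evaluation is thin."]
post_llm = ["The paper presents a commendable and versatile framework."]
print(marker_rate(pre_llm), marker_rate(post_llm))  # a jump suggests AI-edited text
```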

An AI model learnt how to influence human gaming partners by watching people play a collaborative video game based on a 2D virtual kitchen. The algorithm was incentivised to gain points by following certain rules that weren’t known to the human collaborator. To get the person to play in a way that would maximize points, the system would, for example, repeatedly block their path. “This type of approach could be helpful in supporting people to reach their goals when they don’t know the best way to do this,” says computer scientist Emma Brunskill. At the same time, people would need to be able to decide what types of influences they are OK with, says computer scientist Micah Carroll.

Science News | 9 min read

Reference: NeurIPS Proceedings paper
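The paper trains its agent with reinforcement learning; the detail worth illustrating is the incentive structure, in which the agent’s reward includes rules hidden from the human partner. This is a loose, purely illustrative sketch — none of the action names or point values come from the paper.

```python
# Loose illustration of the experiment's incentive structure (not the
# authors' code): the agent's reward mixes the shared task score with
# bonus rules that only the agent knows about.

def agent_reward(task_points, agent_action, human_action):
    reward = task_points                    # the part both players can see
    # Hidden rules, known only to the agent. Illustrative examples:
    if agent_action == "block_path":        # steering the human towards a goal state
        reward += 1.0
    if human_action == "use_left_counter":  # rewarding a specific human behaviour
        reward += 2.0
    return reward

# From this signal alone, the agent learns to nudge the human
# (e.g. by repeatedly blocking a route) into the rewarded behaviours.
print(agent_reward(task_points=5.0, agent_action="block_path",
                   human_action="use_left_counter"))  # -> 8.0
```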

Image of the week

Version 1.0 of JPL’s EELS robot raises its head from the icy surface of Athabasca Glacier in Alberta, Canada, during field testing in September 2023.

NASA/JPL-Caltech

This snake robot, photographed here during tests on a Canadian glacier, could one day explore the icy surface and hydrothermal vents of Saturn’s moon Enceladus. Spiral segments propel the more than four-metre-long Exobiology Extant Life Surveyor (EELS) forward while sensors in its ‘head’ capture information. “There are dozens of textbooks about how to design a four-wheel vehicle, but there is no textbook about how to design an autonomous snake robot to boldly go where no robot has gone before,” says roboticist and EELS co-creator Hiro Ono. “We have to write our own.” (Astronomy | 4 min read)

Reference: Science Robotics paper

Features & opinion

AI systems can help researchers to understand how genetic differences affect people’s responses to drugs. Yet most genetic and clinical data comes from the global north, which can put the health of Africans at risk, writes a group of drug-discovery researchers. They suggest that AI models trained on huge amounts of data can be fine-tuned with information specific to African populations — an approach called transfer learning. The important thing is that scientists in Africa lead the way on these efforts, the group says.

Nature | 10 min read
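The authors describe transfer learning conceptually rather than prescribing code. A minimal sketch of the pattern they name — freeze a large pretrained network, then retrain a small output head on population-specific data — might look like the following in PyTorch; all layer sizes and data here are placeholders.

```python
# Minimal sketch of transfer learning (conceptual, not the authors' model):
# freeze a network pretrained on large global datasets and fine-tune only
# a small output head on data from African populations.

import torch
import torch.nn as nn

pretrained = nn.Sequential(                # stand-in for a large pretrained model
    nn.Linear(1000, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
for p in pretrained.parameters():          # keep the general-purpose features fixed
    p.requires_grad = False

head = nn.Linear(64, 1)                    # new task head, trained from scratch
model = nn.Sequential(pretrained, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head updates
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(32, 1000)                  # placeholder: genomic features
y = torch.randint(0, 2, (32, 1)).float()   # placeholder: drug-response labels
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```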

Malicious deepfakes aren’t the only thing we should be concerned about when it comes to content that can affect the integrity of elections, says US Science Envoy for AI Rumman Chowdhury. Political candidates are increasingly using ‘softfakes’ to boost their campaigns — obviously AI-generated video, audio, images or articles that aim to whitewash a candidate’s reputation and make them more likeable. Social media companies and media outlets need to have clear policies on softfakes, Chowdhury says, and election regulators should take a close look.

Nature | 5 min read

AI models trained on a deceased person’s emails, text messages or voice recordings can let people chat with a simulation of their lost loved ones. “But we’re missing evidence that this technology actually helps the bereaved cope with loss,” says cognitive scientist Tim Reinboth. One small study showed that people used griefbots as a short-term tool to overcome the initial emotional upheaval. Some researchers who study grief worry that an AI illusion could make it harder for mourners to accept the loss. Even if these systems turn out to do more harm than good, there is little regulatory oversight to shut them down.

Undark | 6 min read

Reference: CHI Proceedings paper

Quote of the day

Computer scientist Jennifer Mankoff says that generative artificial intelligence (GAI) can make both tedious and creative tasks more accessible to researchers with disabilities or chronic illnesses. (Nature | 9 min read)


AI-fuelled election campaigns are here — where are the rules?

Of the nearly two billion people living in countries that are holding elections this year, some have already cast their ballots. Elections held in Indonesia and Pakistan in February, among other countries, offer an early glimpse of what’s in store as artificial intelligence (AI) technologies steadily intrude into the electoral arena. The emerging picture is deeply worrying, and the concerns are much broader than just misinformation or the proliferation of fake news.

As the former director of the Machine Learning, Ethics, Transparency and Accountability (META) team at Twitter (before it became X), I can attest to the massive ongoing efforts to identify and halt election-related disinformation enabled by generative AI (GAI). But uses of AI by politicians and political parties for purposes that are not overtly malicious also raise deep ethical concerns.

GAI is ushering in an era of ‘softfakes’. These are images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.

In Indonesia’s presidential election, for example, winning candidate Prabowo Subianto relied heavily on GAI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means ‘cute and cuddly’. This AI-powered makeover was part of a broader attempt to appeal to younger voters and deflect allegations linking him to human-rights abuses during his stint as a high-ranking army officer. The BBC dubbed him “Indonesia’s ‘cuddly grandpa’ with a bloody past”. Furthermore, clever use of deepfakes, including an AI ‘get out the vote’ virtual resurrection of Indonesia’s deceased former president Suharto by a group backing Subianto, is thought by some to have contributed to his surprising win.

Nighat Dad, the founder of the research and advocacy organization Digital Rights Foundation, based in Lahore, Pakistan, documented how candidates in Bangladesh and Pakistan used GAI in their campaigns, including AI-written articles penned under the candidate’s name. South and southeast Asian elections have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn’t do in reality.

What should be done? Global guidelines on the appropriate use of GAI in elections might help, but what should they say? There have already been some attempts. The US Federal Communications Commission, for instance, banned the use of AI-generated voices in automated phone calls, known as robocalls. Businesses such as Meta have launched watermarks — a label or embedded code added to an image or video — to flag manipulated media.

But these are blunt and often voluntary measures. Rules need to be put in place all along the communications pipeline — from the companies that generate AI content to the social-media platforms that distribute it.

Content-generation companies should take a closer look at defining how watermarks should be used. Watermarking can be as obvious as a stamp, or as complex as embedded metadata to be picked up by content distributors.
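As a purely illustrative sketch of those two ends of the spectrum — a visible stamp plus machine-readable metadata — here is how one could label an image with Pillow. This is not any company’s actual watermarking scheme, and the metadata fields are hypothetical.

```python
# Illustrative sketch of the two watermarking styles described above
# (not any platform's real scheme): a visible stamp plus embedded
# metadata that a distributor's pipeline could read back.

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (640, 360), "grey")  # stand-in for generated media
ImageDraw.Draw(img).text((10, 10), "AI-GENERATED", fill="white")  # the obvious stamp

meta = PngInfo()                            # the machine-readable layer
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical provenance field
img.save("softfake.png", pnginfo=meta)

print(Image.open("softfake.png").text)      # what a distributor would check
# -> {'ai_generated': 'true', 'generator': 'example-model-v1'}
```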

Companies that distribute content should put in place systems and resources to monitor not just misinformation, but also election-destabilizing softfakes that are released through official, candidate-endorsed channels. When candidates don’t adhere to watermarking — none of these practices are yet mandatory — social-media companies can flag the content and alert viewers appropriately. Media outlets can and should have clear policies on softfakes. They might, for example, allow a deepfake in which a victory speech is translated into multiple languages, but disallow deepfakes of deceased politicians supporting candidates.

Election regulatory and government bodies should closely examine the rise of companies that are engaging in the development of fake media. Text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, was deployed to generate robocalls that tried to dissuade voters from voting for US President Joe Biden in the New Hampshire primary elections in January, and to create the softfakes of former Pakistani prime minister Imran Khan during his 2024 campaign outreach from a prison cell. Rather than imposing softfake regulation on companies, which could stifle allowable uses such as parody, I instead suggest establishing election standards on GAI use. There is a long history of laws that limit when, how and where candidates can campaign, and what they are allowed to say.

Citizens have a part to play as well. We all know that you cannot trust what you read on the Internet. Now, we must develop the reflexes not only to spot altered media, but also to resist the emotional urge to find candidates’ softfakes ‘funny’ or ‘cute’. The intent of these fakes isn’t to lie to you — they are often obviously AI-generated. The goal is to make the candidate likeable.

Softfakes are already swaying elections in some of the largest democracies in the world. We would be wise to learn and adapt as the ongoing year of democracy, with some 70 elections, unfolds over the next few months.

Competing Interests

The author declares no competing interests.
