
The European Union is investigating Meta’s election policies

The EU has officially opened a significant investigation into Meta for its alleged failures to remove election disinformation. While the European Commission’s statement doesn’t explicitly mention Russia, Meta confirmed to Engadget the EU probe targets the country’s Doppelganger campaign, an online disinformation operation pushing pro-Kremlin propaganda.

Bloomberg’s sources also said the probe was focused on the Russian disinformation operation, describing it as a series of “attempts to replicate the appearance of traditional news sources while churning out content that is favorable to Russian President Vladimir Putin’s policies.”

The investigation comes a day after France said 27 EU member states had been targeted by pro-Russian online propaganda ahead of European parliamentary elections in June. On Monday, France’s minister for Europe, Jean-Noël Barrot, urged social platforms to block websites “participating in a foreign interference operation.”

A Meta spokesperson told Engadget that the company had been at the forefront of exposing Russia’s Doppelganger campaign, first spotlighting it in 2022. The company said it has since investigated, disrupted and blocked tens of thousands of the network’s assets. The Facebook and Instagram owner says it remains on high alert to monitor the network, while claiming Doppelganger has struggled to build organic audiences for its pro-Putin fake news.

Mark Zuckerberg onstage during a company keynote presentation. Profile view from his left side.

Meta

European Commission President Ursula von der Leyen said Meta’s platforms, Facebook and Instagram, may have breached the Digital Services Act (DSA), the landmark legislation passed in 2022 that empowers the EU to regulate social platforms. The law allows the EC, if necessary, to fine violating companies up to six percent of their global annual turnover, penalties heavy enough to change how social companies operate.

In a statement to Engadget, Meta said, “We have a well-established process for identifying and mitigating risks on our platforms. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work.”

The EC probe will cover “Meta’s policies and practices relating to deceptive advertising and political content on its services.” It also addresses “the non-availability of an effective third-party real-time civic discourse and election-monitoring tool ahead of the elections to the European Parliament.”

The latter refers to Meta’s deprecation of its CrowdTangle tool, which researchers and fact-checkers used for years to study how content spreads across Facebook and Instagram. Dozens of groups signed an open letter last month, saying Meta’s plan to shut the tool down during the crucial 2024 global election cycle poses a “direct threat” to global election integrity.

Meta told Engadget that CrowdTangle only provides a fraction of the publicly available data and would be lacking as a full-fledged election monitoring tool. The company says it’s building new tools on its platform to provide more comprehensive data to researchers and other outside parties. It says it’s currently onboarding key third-party fact-checking partners to help identify misinformation.

However, with Europe’s elections in June and the critical US elections in November, Meta had better get moving on its new API if it wants the tools to work when it matters most.

The EC gave Meta five working days to respond to its concerns before it would consider further escalating the matter. “This Commission has created means to protect European citizens from targeted disinformation and manipulation by third countries,” EC President von der Leyen wrote. “If we suspect a violation of the rules, we act.”


AI-fuelled election campaigns are here

Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

A woman plays Go with an AI-powered robot developed by the firm SenseTime, based in Hong Kong, at the Mobile World Congress 2024. Credit: Joan Cros/NurPhoto via Getty

AI systems can now nearly match (and sometimes exceed) human performance in tasks such as reading comprehension, image classification and mathematics. “The pace of gain has been startlingly rapid,” says social scientist Nestor Maslej, editor-in-chief of the annual AI Index. The report calls for new benchmarks to assess algorithms’ capabilities and highlights the need for a consensus on what ethical AI models would look like. The report also finds that much of the cutting-edge work is being done in industry: companies produced 51 notable AI systems in 2023, with 15 coming from academic research.

Nature | 6 min read

Reference: 2024 AI Index report

Speedy advances: Line chart showing the performance of AI systems on certain benchmark tests compared to humans since 2012.

Source: Artificial Intelligence Index Report 2024.

An experimental AI system called the Dust Watcher can predict the timing and severity of an incoming dust storm up to 12 hours in advance. Harmful and damaging dust storms — like the ones that battered Beijing over the weekend — sweep many Asian countries every year. A separate storm predictor, called Dust Assimilation and Prediction System, dynamically integrates observational data with the calculations of its model to generate a 48-hour forecast. “It almost acts like an autopilot for the model,” says atmospheric scientist Jin Jianbing.

Nature | 6 min read

A study identified buzzwords that could be considered typical of AI-generated text in up to 17% of peer-review reports for papers submitted to four conferences. The buzzwords include adjectives such as ‘commendable’ and ‘versatile’ that seem to be disproportionately used by chatbots. It’s unclear whether researchers used the tools to construct their reviews from scratch or just to edit and improve written drafts. “It seems like when people have a lack of time, they tend to use ChatGPT,” says study co-author and computer scientist Weixin Liang.

Nature | 5 min read

Reference: arXiv preprint (not peer reviewed)
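For intuition, here is a minimal sketch in Python of how one might measure whether such buzzwords are over-represented in a batch of review texts. It is purely illustrative and not the study’s actual method, which models token-frequency shifts statistically; the word list uses adjectives the study flagged.

```python
# Count how often suspected chatbot buzzwords appear per 1,000 words,
# so two corpora of reviews (say, pre- and post-ChatGPT) can be compared.
from collections import Counter
import re

# Adjectives reported as disproportionately used by chatbots.
BUZZWORDS = {"commendable", "versatile", "meticulous", "intricate"}

def buzzword_rate(texts: list[str]) -> float:
    """Occurrences of buzzwords per 1,000 tokens across all texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in BUZZWORDS)
    return 1000 * hits / max(len(tokens), 1)

# Hypothetical usage: a rate that jumps between years hints at AI-edited text.
print(buzzword_rate(["The paper is commendable and the analysis meticulous."]))
```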

An AI model learnt how to influence human gaming partners by watching people play a collaborative video game based on a 2D virtual kitchen. The algorithm was incentivised to gain points by following certain rules that weren’t known to the human collaborator. To get the person to play in a way that would maximize points, the system would, for example, repeatedly block their path. “This type of approach could be helpful in supporting people to reach their goals when they don’t know the best way to do this,” says computer scientist Emma Brunskill. At the same time, people would need to be able to decide what types of influences they are OK with, says computer scientist Micah Carroll.

Science News | 9 min read

Reference: NeurIPS Proceedings paper

Image of the week

Version 1.0 of JPL’s EELS robot raises its head from the icy surface of Athabasca Glacier in Alberta, Canada, during field testing in September 2023.

NASA/JPL-Caltech

This snake robot, photographed here during tests on a Canadian glacier, could one day explore the icy surface and hydrothermal vents of Saturn’s moon Enceladus. Spiral segments propel the more than four-metre-long Exobiology Extant Life Surveyor (EELS) forward while sensors in its ‘head’ capture information. “There are dozens of textbooks about how to design a four-wheel vehicle, but there is no textbook about how to design an autonomous snake robot to boldly go where no robot has gone before,” says roboticist and EELS co-creator Hiro Ono. “We have to write our own.” (Astronomy | 4 min read)

Reference: Science Robotics paper

Features & opinion

AI systems can help researchers to understand how genetic differences affect people’s responses to drugs. Yet most genetic and clinical data comes from the global north, which can put the health of Africans at risk, writes a group of drug-discovery researchers. They suggest that AI models trained on huge amounts of data can be fine-tuned with information specific to African populations — an approach called transfer learning. The important thing is that scientists in Africa lead the way on these efforts, the group says.

Nature | 10 min read
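As a rough illustration of the transfer-learning approach the researchers describe, the sketch below (in PyTorch; every model, shape and dataset here is hypothetical, not from the article) freezes a pretrained backbone and fine-tunes only the output head on population-specific data:

```python
# Minimal transfer-learning sketch: a model pretrained on large (mostly
# global-north) data is fine-tuned on a smaller, population-specific dataset.
import torch
import torch.nn as nn

# Stand-in for a model pretrained on a large pharmacogenomic dataset.
pretrained = nn.Sequential(
    nn.Linear(1000, 256), nn.ReLU(),   # feature extractor ("backbone")
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),                  # head: predicted drug response
)

# Freeze the backbone so only the head adapts to the new population.
for layer in pretrained[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

# Hypothetical fine-tuning data specific to one cohort.
x = torch.randn(128, 1000)   # genotype-derived features
y = torch.randn(128, 1)      # measured drug response

opt = torch.optim.Adam(
    (p for p in pretrained.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.MSELoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(pretrained(x), y)
    loss.backward()
    opt.step()
```

Freezing the backbone keeps the general patterns learnt from large datasets while letting the small, population-specific dataset adjust the final prediction.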

Malicious deepfakes aren’t the only thing we should be concerned about when it comes to content that can affect the integrity of elections, says US Science Envoy for AI Rumman Chowdhury. Political candidates are increasingly using ‘softfakes’ to boost their campaigns — obviously AI-generated video, audio, images or articles that aim to whitewash a candidate’s reputation and make them more likeable. Social media companies and media outlets need to have clear policies on softfakes, Chowdhury says, and election regulators should take a close look.

Nature | 5 min read

AI models trained on a deceased person’s emails, text messages or voice recordings can let people chat with a simulation of their lost loved ones. “But we’re missing evidence that this technology actually helps the bereaved cope with loss,” says cognitive scientist Tim Reinboth. One small study showed that people used griefbots as a short-term tool to overcome the initial emotional upheaval. Some researchers who study grief worry that an AI illusion could make it harder for mourners to accept the loss. Even if these systems turn out to do more harm than good, there is little regulatory oversight to shut them down.

Undark | 6 min read

Reference: CHI Proceedings paper

Quote of the day

Computer scientist Jennifer Mankoff says that generative artificial intelligence (GAI) can make both tedious and creative tasks more accessible to researchers with disabilities or chronic illnesses. (Nature | 9 min read)


AI-fuelled election campaigns are here — where are the rules?

Of the nearly two billion people living in countries that are holding elections this year, some have already cast their ballots. Elections held in Indonesia and Pakistan in February, among other countries, offer an early glimpse of what’s in store as artificial intelligence (AI) technologies steadily intrude into the electoral arena. The emerging picture is deeply worrying, and the concerns are much broader than just misinformation or the proliferation of fake news.

As the former director of the Machine Learning, Ethics, Transparency and Accountability (META) team at Twitter (before it became X), I can attest to the massive ongoing efforts to identify and halt election-related disinformation enabled by generative AI (GAI). But uses of AI by politicians and political parties for purposes that are not overtly malicious also raise deep ethical concerns.

GAI is ushering in an era of ‘softfakes’. These are images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.

In Indonesia’s presidential election, for example, winning candidate Prabowo Subianto relied heavily on GAI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means ‘cute and cuddly’. This AI-powered makeover was part of a broader attempt to appeal to younger voters and deflect allegations linking him to human-rights abuses during his stint as a high-ranking army officer. The BBC dubbed him “Indonesia’s ‘cuddly grandpa’ with a bloody past”. Furthermore, clever use of deepfakes, including an AI ‘get out the vote’ virtual resurrection of Indonesia’s deceased former president Suharto by a group backing Subianto, is thought by some to have contributed to his surprising win.

Nighat Dad, the founder of the research and advocacy organization Digital Rights Foundation, based in Lahore, Pakistan, documented how candidates in Bangladesh and Pakistan used GAI in their campaigns, including AI-written articles penned under the candidate’s name. South and southeast Asian elections have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn’t do in reality.

What should be done? Global guidelines on the appropriate use of GAI in elections might be considered, but what should they be? There have already been some attempts. The US Federal Communications Commission, for instance, banned the use of AI-generated voices in phone calls, known as robocalls. Businesses such as Meta have launched watermarks — a label or embedded code added to an image or video — to flag manipulated media.

But these are blunt and often voluntary measures. Rules need to be put in place all along the communications pipeline — from the companies that generate AI content to the social-media platforms that distribute them.

Content-generation companies should take a closer look at defining how watermarks should be used. Watermarking can be as obvious as a stamp, or as complex as embedded metadata to be picked up by content distributors.
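To make the ‘embedded metadata’ end of that spectrum concrete, here is a minimal sketch in Python using Pillow. The field names are invented for illustration; real deployments, such as Meta’s labels, build on shared provenance standards like C2PA rather than ad-hoc tags.

```python
# Embed and read back a machine-readable AI-provenance label in a PNG's
# text chunks. Field names ("ai_generated", "ai_generator") are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Save a copy of the image with provenance metadata embedded."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_generator", generator)  # e.g. the campaign's tool
    image.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Distributor-side check: collect any embedded text-chunk labels."""
    return dict(Image.open(path).text)  # PNG text chunks as a dict
```

Metadata of this kind survives honest redistribution but is trivially stripped, which is one reason such measures remain blunt without rules along the whole communications pipeline.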

Companies that distribute content should put in place systems and resources to monitor not just misinformation, but also election-destabilizing softfakes that are released through official, candidate-endorsed channels. When candidates don’t adhere to watermarking — none of these practices are yet mandatory — social-media companies can flag and provide appropriate alerts to viewers. Media outlets can and should have clear policies on softfakes. They might, for example, allow a deepfake in which a victory speech is translated to multiple languages, but disallow deepfakes of deceased politicians supporting candidates.

Election regulatory and government bodies should closely examine the rise of companies engaged in developing fake media. Text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, was deployed to generate robocalls that impersonated US President Joe Biden and discouraged voters from voting in the New Hampshire primary elections in January, and to create the softfakes of former Pakistani prime minister Imran Khan during his 2024 campaign outreach from a prison cell. Rather than imposing softfake regulations on companies, which could stifle allowable uses such as parody, I instead suggest establishing election standards on GAI use. There is a long history of laws that limit when, how and where candidates can campaign, and what they are allowed to say.

Citizens have a part to play as well. We all know that you cannot trust what you read on the Internet. Now, we must develop the reflexes not only to spot altered media, but also to resist the emotional urge to see candidates’ softfakes as ‘funny’ or ‘cute’. The intent of these fakes isn’t to deceive you; they are often obviously AI-generated. The goal is to make the candidate likeable.

Softfakes are already swaying elections in some of the largest democracies in the world. We would be wise to learn and adapt as the ongoing year of democracy, with some 70 elections, unfolds over the next few months.

Competing Interests

The author declares no competing interests.


What Putin’s election win means for Russian science

Vladimir Putin behind a lectern marked with a golden eagle crest.

Vladimir Putin spoke at an event marking the 300th anniversary of the Russian Academy of Sciences.Credit: Getty Images

Russian President Vladimir Putin has secured a fifth term in office, claiming a landslide victory in the country’s presidential election on 18 March. Election officials say he won a record 87% of votes. This outcome came as a surprise to no one, and many international leaders have condemned the vote as not being free or fair.

Researchers interviewed by Nature say that another six years of Putin’s leadership does not bode well for Russian science, which has been shunned globally in response to the country’s ongoing invasion of Ukraine, and is on precarious ground at home. Those still in Russia must choose their words carefully: as one scientist, who wishes to remain anonymous, put it, “business as usual” now includes possible prison time for offhand comments.

Publicly, Putin’s government is a big supporter of research. In early February, at a celebration of the 300th anniversary of the Russian Academy of Sciences, Putin bolstered the academy’s role, effectively reversing parts of a sweeping reform, overseen during his third term, that had limited the academy’s autonomy. And at the end of last month, he signed an update to the 2030 national science and technology strategy, which calls for funding for research and development to double to 2% of gross domestic product, and stresses an increased role for applied science amid “sanctions pressure”.

Despite being made before the election, these big announcements were framed not as campaign promises but as top-down directives, says Irina Dezhina, an economist at the Gaidar Institute for Economic Policy in Moscow. “The fact that it was set in motion back then implies no one really expected any changes at the helm.”

Fractured landscape

Although domestic support for Russian science, which remains mostly state-funded, appears to be strong, many collaborations with countries in the West have broken down since the invasion of Ukraine, prompting a shift to new partners in India and China.

After intense internal discussions, CERN, the European particle-physics powerhouse near Geneva, Switzerland, voted in December 2023 to end ties with Russian research institutions once the current agreement expires in November this year. And the war has severely disrupted science in the Arctic, where Russia controls about half of a region that is particularly vulnerable to climate change. A study1 this year gave a sense of how collaborative projects could be affected by losing Russian data: excluding Russian stations from the International Network for Terrestrial Research and Monitoring in the Arctic causes shifts in project results that are in some cases as large as the total expected impact of warming by 2100.

Reports also suggest that political oppression combined with the threat of military draft have led to a ‘brain drain’ among scientists. Getting an accurate headcount is challenging, but a January estimate by the Latvia-based independent newspaper Novaya Gazeta Europe, based on researchers’ ORCID identifiers, says at least 2,500 researchers have left Russia since February 2022.

Researchers who stayed in Russia have had to contend with serious supply-chain disruptions as well as personal risks. And international sanctions on Russia might have hit even the most productive scientists: according to a January 2024 paper co-authored by Dezhina, which surveyed some of the most published and cited Russian researchers, three out of four of them report at least some fallout from sanctions, mostly economic ones2.

Russia’s isolation has particularly affected the medical sciences, because it means that international clinical trials are no longer held there, says Vasily Vlassov, a health-policy researcher at the Higher School of Economics University in Moscow. He fears that being cut off from the global community will erode Russia’s expertise in this fast-moving and technically complex field: “It’s a problem we have yet to fully appreciate.”

Researchers in the social sciences and humanities are less dependent on overseas partners, but they are affected by increasingly nationalist ideology, says a Russian researcher who asked to remain anonymous. When reviewing articles for publication in Russian journals, the researcher says, they are seeing an increasing number of submissions blaming problems in research and higher education on ‘the collective West’, a common propaganda term. “It’s everywhere, and it’s poisoning minds.”

Uncertain future

The election outcome serves as a reminder of the ongoing war and the openly totalitarian environment in Russia, says Alexander Kabanov, chief executive of the Russian-American Science Association, a US-based non-profit organization. “We are still dealing with an ongoing disaster,” he says.

Yet the impacts of sanctions on Russian science are beginning to fade from public consciousness in other countries. Pierre-Bruno Ruffini, who studies science diplomacy at Le Havre University-Normandy in Le Havre, France, says that academic sanctions and their consequences have “rapidly and completely disappeared” from discussions in the French research community. Dezhina agrees, and adds that, in her experience, even cooperation between individual scientists, once seen as a promising workaround for institutional bans, is on the decline.

Researchers in exile are working on an alternative to the state’s vision of the future for Russia and national science. A policy paper published earlier this month by Reforum, a European project that aims to create a “roadmap of reforms for Russia”, presents a to-do list for revitalizing Russian research. Three out of five of the tasks listed focus on bringing it back into the international fold. Olga Orlova, a science journalist who wrote the policy paper, thinks that scientists in Russia have a part in building that future.

“They shouldn’t be afraid of the change — they should be working for it,” she says.
