Can AI’s bias problem be fixed?


Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

Credit: Juan Gaertner/Science Photo Library

For the first time, an AI system has helped researchers to design completely new antibodies. An algorithm similar to those of the image-generating tools Midjourney and DALL·E has churned out thousands of new antibodies that recognize certain bacterial, viral or cancer-related targets. Although in laboratory tests only about one in 100 designs worked as hoped, biochemist and study co-author Joseph Watson says that “it feels like quite a landmark moment”.

Nature | 4 min read

Reference: bioRxiv preprint (not peer reviewed)

US computer-chip giant Nvidia says that a ‘superchip’ made up of two of its new ‘Blackwell’ graphics processing units and its central processing unit (CPU) offers 30 times better performance for running chatbots such as ChatGPT than its previous ‘Hopper’ chips, while using 25 times less energy. The chip is likely to be so expensive that it “will only be accessible to a select few organisations and countries”, says Sasha Luccioni from the AI company Hugging Face.

New Scientist | 3 min read

A machine-learning tool shows promise for detecting COVID-19 and tuberculosis from a person’s cough. While previous tools used medically annotated data, this model was trained on more than 300 million clips of coughing, breathing and throat clearing from YouTube videos. Although it’s too early to tell whether this will become a commercial product, “there’s an immense potential not only for diagnosis, but also for screening” and monitoring, says laryngologist Yael Bensoussan.


Nature | 5 min read

Reference: arXiv preprint (not peer reviewed)

In blind tests, five football experts favoured an AI coach’s corner-kick tactics over existing ones 90% of the time. ‘TacticAI’ was trained on more than 7,000 examples of corner kicks provided by the UK’s Liverpool Football Club. Corner kicks are major scoring opportunities, and strategies for them are worked out ahead of matches. “What’s exciting about it from an AI perspective is that football is a very dynamic game with lots of unobserved factors that influence outcomes,” says computer scientist and study co-author Petar Veličković.

Financial Times | 4 min read

Reference: Nature Communications paper

Features & opinion

AI image generators can amplify biased stereotypes in their output. There have been attempts to quash the problem by manual fine-tuning (which can have unintended consequences, for example generating diverse but historically inaccurate images) and by increasing the amount of training data. “People often claim that scale cancels out noise,” says cognitive scientist Abeba Birhane. “In fact, the good and the bad don’t balance out.” The most important step to understanding how these biases arise and how to avoid them is transparency, researchers say. “If a lot of the data sets are not open source, we don’t even know what problems exist,” says Birhane.

Nature | 12 min read

Amplified stereotypes: chart comparing how people working in different professions self-identify with how AI models depict them.

Source: Ref. 1

AI regulation

The European Union’s sweeping new AI law has cleared one of its last bureaucratic hurdles and will come into force in May.

- Some ‘high-risk’ uses of AI, such as in healthcare, education and policing, will be banned by the end of 2024.
- Companies will need to label AI-generated content and will need to notify people when they are interacting with AI systems.
- Citizens can complain when they suspect an AI system has harmed them.
- Some companies, such as those developing general-purpose large language models, will need to become more transparent about their algorithms’ training data.


MIT Technology Review | 6 min read

India has made a U-turn with its AI governance by scrapping an advisory that asked developers to obtain permission before launching certain untested AI models. The government now recommends that AI companies label “the possible inherent fallibility or unreliability of the output generated”.

The Indian Express | 3 min read

The African Union has drafted an ambitious AI policy for its 55 member nations, including the establishment of national councils to monitor responsible deployment of the technology. Some African researchers are concerned that this could stifle innovation and leave economies behind. Others say it’s important to think early about protecting people from harm, including exploitation by AI companies. “We must contribute our perspectives and own our regulatory frameworks,” says policy specialist Melody Musoni. “We want to be standard makers, not standard takers.”

MIT Technology Review | 5 min read

In 2017, eight Google researchers created transformers, the neural-network architecture that would become the basis of most AI tools, from ChatGPT to DALL·E. Transformers give AI systems the ‘attention span’ to parse long chunks of text and extract meaning from context. “It was pretty evident to us that transformers could do really magical things,” recalls computer scientist Jakob Uszkoreit who was one of the Google group. Although the work was creating a buzz in the AI community, Google was slow to adopt transformers. “Realistically, we could have had GPT-3 or even 3.5 probably in 2019, maybe 2020,” Uszkoreit says.

Wired | 24 min read


