Today, machine learning helps determine which loans we qualify for, which jobs we get, and even who goes to prison. But can computers make fair calls when it comes to these potentially life-changing decisions? In a study published September 29 in the journal Patterns, researchers from Germany show that, with human supervision, people perceive a decision made by a computer as just as fair as one made by humans.
"Much of the discussion about fairness in machine learning has focused on technical solutions, such as how to fix unfair algorithms and make systems fair," says co-author Robin Bach, a computational sociologist at the University of Mannheim in Germany. "But our question is, what do people think is right? It's not just about developing algorithms. They must be accepted by society and conform to standard real beliefs.”
Automated decision-making, in which a computer alone makes the call, excels at analyzing large datasets to detect patterns. Computers are often viewed as objective and neutral compared with humans, whose biases can cloud their judgment. Yet biases can creep into computer systems when they learn from data that reflects patterns of discrimination in our world. Understanding fairness in decisions made by computers and by humans is crucial to building a fairer society.
To understand how people judge the fairness of automated decision-making, the researchers surveyed 3,930 people in Germany. They presented participants with hypothetical scenarios related to the banking, job, prison, and unemployment systems. Within the scenarios, they varied several factors, including whether the decision led to a positive or negative outcome, where the data used for the evaluation came from, and who made the final decision: a human, a computer, or both.
"As expected, we found that fully automated decision-making is undesirable," says first author Christoph Kern, a computational sociologist at the University of Mannheim. "But what's interesting is that when one controls for automated decision-making, the level of perceived fairness is similar to human-centered decision-making." The results showed that when people are involved in decision making, they perceive that decision as fairer.
People's fairness concerns were also stronger when decisions were tied to the criminal justice system or to job prospects. Possibly weighing losses more heavily than gains, participants rated decisions that could lead to positive outcomes as fairer than those leading to negative ones. Compared with systems that rely only on data related to the scenario at hand, systems that draw on additional, unrelated data from the internet were considered less fair, underscoring the importance of data transparency and privacy. Together, the results show that context matters: automated decision-making systems need to be designed with care when concerns about fairness are at stake.
Although the hypothetical scenarios in the survey do not map exactly onto the real world, the team is already considering next steps to better understand fairness. They plan to conduct further research into how different people define fairness, and they want to use similar surveys to ask questions about related ideas, such as distributive fairness, the fair allocation of resources within a community.
"In a way, we hope that people in the industry will take these results as food for thought and testing before they design and implement automated decision making," Bach said. "We also need to make sure people understand how data is processed and how decisions are made based on it."
Story source:
Materials provided by Cell Press. Note: Content may be edited for style and length.