National Science Foundation grant to help researchers create algorithms that promote system fairness


A new University of Colorado Boulder study, funded by a three-year, $930,000 grant from the National Science Foundation, aims to make the algorithms behind recommendation systems fairer.

The study will look at recommender systems, personalized access systems that suggest items to individuals, and at ways to make those systems fairer by reducing bias and disadvantage.

Robin Burke, CU Boulder professor and chair of the Information Science Department, said the technology is prevalent and appears in many applications, including Netflix, Spotify and LinkedIn. These systems surface items individuals may be interested in, but they have a natural tendency to promote whatever is already popular, which can put less popular items, and the people behind them, at a disadvantage.
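
To illustrate the kind of popularity bias Burke describes, the following is a minimal sketch, not code from the study: the loan data, the exposure field and the fairness_weight knob are all invented for illustration. It shows a toy recommender that ranks Kiva-style loans purely by popularity, and a simple re-ranking that blends in a boost for borrowers the system has rarely shown.

```python
# Illustrative sketch only; not code from the CU Boulder study.
# A toy recommender that ranks Kiva-style loans purely by popularity,
# followed by a re-ranking that blends in a boost for under-exposed borrowers.

from dataclasses import dataclass

@dataclass
class Loan:
    borrower: str
    popularity: float   # e.g. how often the loan has been viewed or funded
    exposure: float     # how often the system has already recommended it

loans = [
    Loan("well-known bakery", popularity=0.9, exposure=0.80),
    Loan("new smallholder farm", popularity=0.3, exposure=0.10),
    Loan("established tailor", popularity=0.7, exposure=0.60),
    Loan("first-time weaver", popularity=0.4, exposure=0.05),
]

def popularity_ranking(loans):
    """Naive ranking: the most popular loans always come first."""
    return sorted(loans, key=lambda l: l.popularity, reverse=True)

def fairness_aware_ranking(loans, fairness_weight=0.5):
    """Blend popularity with a boost for loans the system has rarely shown.

    fairness_weight is a hypothetical knob: 0 reproduces the naive ranking,
    larger values give more weight to under-exposed borrowers.
    """
    def score(l):
        return (1 - fairness_weight) * l.popularity + fairness_weight * (1 - l.exposure)
    return sorted(loans, key=score, reverse=True)

if __name__ == "__main__":
    print("By popularity :", [l.borrower for l in popularity_ranking(loans)])
    print("Fairness-aware:", [l.borrower for l in fairness_aware_ranking(loans)])
```

In the purely popularity-based ranking, the already well-known borrowers always come first; the blended score lets rarely shown borrowers rise toward the top without ignoring what users are likely to be interested in.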

“It’s an interesting technical challenge certainly, but I think it’s also an obligation,” Burke said. “If you’re building this kind of technology, you want to make sure that it’s beneficial.”

Burke will work alongside CU Boulder Associate Professor Amy Voida and Tulane University Assistant Professor Nick Mattei in a partnership with Kiva, a nonprofit microlending organization that crowdfunds loans for underserved communities. Burke, who has worked with Kiva for about three to four years, said the study will involve helping the nonprofit recommend loans to users more fairly.

Burke said the researchers will first try to understand fairness as a concept in the context of Kiva's mission, conducting interviews and focus groups with the organization. They will then translate those ideas into algorithms that address fairness concerns while still providing personalized recommendations, and put the systems into practice.

Jessie Smith, a Ph.D. student at CU Boulder, said Kiva is an example of a high-stakes domain for machine learning because the platform can have real impacts on people. Smith will help conduct the interviews with Kiva employees and, in the later stages of the research, translate those responses into algorithm design.

“If certain borrowers on the site aren’t recommended fairly, they could consistently become underfunded on the platform,” Smith wrote in an email. “Alternatively, if certain borrowers are over-recommended on the platform, they could consistently get funded over other equally deserving borrowers. The concerns for fair treatment within the machine learning application of recommendations on this platform are very real and have very real consequences to people all across the world.”

Smith said understanding fairness and being transparent about how algorithms affect people's lives are important steps toward creating a more equitable, technology-filled future.

“I hope this study will help create more ‘fair’ outcomes for various stakeholders of Kiva’s microlending platform, and I also hope that it will act as a preliminary case study for operationalizing fairness in recommender systems in a real-world setting, while still preserving other common machine learning priorities (such as accuracy, user retention, privacy or efficiency),” Smith said.

Burke said he hopes the study will contribute to the scientific community by demonstrating the full process of building a practical machine learning fairness initiative, from stakeholder consultation at the start to addressing fairness concerns at the end. He also hopes to validate ideas about using a social choice framework to integrate multiple fairness concerns.
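
Burke's reference to a social choice framework can be made concrete with a small, hedged sketch: the example below uses one classic social choice method, a Borda count, to merge several fairness concerns into a single recommendation decision. The concern names and loans are invented for illustration and are not taken from the study.

```python
# Illustrative sketch only: a toy example of using a social choice method
# (Borda count) to combine several fairness concerns into one decision.
# The concerns and loan names are made up and are not from the study.

from collections import defaultdict

candidate_loans = ["loan_A", "loan_B", "loan_C", "loan_D"]

# Each fairness concern ranks the candidates from most to least preferred
# for the next recommendation slot.
rankings_by_concern = {
    "boost_underfunded_regions": ["loan_C", "loan_D", "loan_A", "loan_B"],
    "balance_business_sectors":  ["loan_D", "loan_C", "loan_B", "loan_A"],
    "limit_repeat_exposure":     ["loan_C", "loan_A", "loan_D", "loan_B"],
}

def borda_winner(rankings, candidates):
    """Score each candidate by its position in every ranking and return the best."""
    scores = defaultdict(int)
    top = len(candidates) - 1
    for ranking in rankings.values():
        for position, loan in enumerate(ranking):
            scores[loan] += top - position
    return max(candidates, key=lambda loan: scores[loan]), dict(scores)

if __name__ == "__main__":
    winner, scores = borda_winner(rankings_by_concern, candidate_loans)
    print("Borda scores:", scores)
    print("Next recommended loan:", winner)
```

Other voting rules could stand in for the Borda count; the point is simply that competing fairness concerns can be treated like voters whose preferences are aggregated into one outcome.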

“I think the other important outcome is we’re going to have students, graduate students and undergraduate students involved in the project, and I want to get people interested in this idea of machine learning fairness, and developing tools around enabling similar kinds of things in other organizations,” Burke said.

Throughout the study, the researchers and the nonprofit will work to develop a suite of tools that companies and nonprofits can use to create their own customized, fairness-aware algorithms, according to a press release.