
Blog: The bias trade-off for grantmaking algorithms


By Joost van der Linden, Data Scientist at Our Community

At Our Community, we do a lot of thinking about the future of grantmaking. For example, what role (if any) should automatic assessment methods play? Assessment algorithms have made profound progress recently: screening compounds for viable new drugs, identifying suicide risk to aid crisis counselors, automatically diagnosing heart disease and lung cancer, and much more. However, these algorithms are not without risk. What can machine learning and artificial intelligence do for the grantmaking assessment process, and what are the risks?

Suppose we wish to develop an algorithm that automatically shortlists the most promising grant applications. A grantmaker may receive thousands of applications. They may want to reduce their workload by asking the algorithm to reduce the entire set of applications to the most promising ones, with the shortlist to be further assessed by the human assessors. Immediately, some alarm bells start to ring.

  • How do we ensure that our algorithm is fair?
  • How do we ensure that all promising applications make the shortlist, and that no good ones are missed?
  • Is it even possible to guarantee that no good applications are unfairly denied a spot on the shortlist?
  • How do we identify those promising applications in the first place — what criteria should the algorithm use?
  • How do we explain the algorithmic decision to the grantmaker and the grant applicant?
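
The shortlisting step described above can be sketched in a few lines. This is a hypothetical illustration, not Our Community's actual system: `score_fn` stands in for whatever trained model or scoring rule the grantmaker would use, and the applications and scores are made up.

```python
# Hypothetical sketch of algorithmic shortlisting: rank a pool of
# applications by a predicted score and keep only the top k for
# human assessors to review. The scoring model itself is assumed.
def shortlist(applications, score_fn, k):
    """Return the k applications with the highest predicted scores."""
    ranked = sorted(applications, key=score_fn, reverse=True)
    return ranked[:k]

# Toy usage with made-up applications and a stand-in scoring function.
apps = [
    {"id": 1, "score": 0.9},
    {"id": 2, "score": 0.4},
    {"id": 3, "score": 0.7},
]
top = shortlist(apps, lambda a: a["score"], k=2)
print([a["id"] for a in top])  # applications 1 and 3 make the shortlist
```

Every question in the list above lives inside this tiny sketch: the fairness of the shortlist depends entirely on what `score_fn` rewards, and application 2 is silently cut with no explanation attached.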

Leaving aside all of the other questions, let’s think about the issue of fairness. Algorithms are rarely 100% accurate in the real world, and the data our algorithms use to make decisions is never truly unbiased. Under these circumstances, it has recently been shown mathematically (paper) that some bias is unavoidable: several common definitions of fairness cannot all be satisfied at once, so any algorithm must trade one off against another.
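
A toy example with made-up numbers makes the trade-off concrete. Below, two fairness notions are measured on hypothetical shortlist outcomes for two groups of applicants: demographic parity (equal selection rates) and equal opportunity (equal selection rates among genuinely worthy applicants). The data is constructed so the first holds while the second fails; the group records, labels, and numbers are all illustrative assumptions, not real grant data.

```python
# Two common fairness metrics, measured on hypothetical shortlist outcomes.
# "selected" is the algorithm's decision; "worthy" is the (assumed) ground truth.
def selection_rate(records):
    """Fraction of the group that was shortlisted (demographic parity)."""
    return sum(r["selected"] for r in records) / len(records)

def true_positive_rate(records):
    """Fraction of worthy applicants that was shortlisted (equal opportunity)."""
    worthy = [r for r in records if r["worthy"]]
    return sum(r["selected"] for r in worthy) / len(worthy)

# Made-up outcomes for two groups of four applicants each.
group_a = [
    {"selected": 1, "worthy": 1}, {"selected": 1, "worthy": 1},
    {"selected": 0, "worthy": 1}, {"selected": 0, "worthy": 0},
]
group_b = [
    {"selected": 1, "worthy": 1}, {"selected": 1, "worthy": 0},
    {"selected": 0, "worthy": 1}, {"selected": 0, "worthy": 0},
]

# Demographic parity holds: both groups have a 50% selection rate.
print(selection_rate(group_a), selection_rate(group_b))   # 0.5 0.5
# Equal opportunity fails: worthy applicants in group A are shortlisted
# 2/3 of the time, but only 1/2 of the time in group B.
print(true_positive_rate(group_a), true_positive_rate(group_b))
```

Repairing the second metric here (e.g. shortlisting one more worthy applicant from group B) would break the first, which is the trade-off in miniature.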

It is ultimately up to us as the algorithm’s developers, in collaboration with its stakeholders (i.e. grantmakers and grantseekers), to decide how we choose to make the trade-off.

We believe it is important to understand these trade-offs, to make conscious decisions about how to address them, and to be able to explain those decisions to the people they affect.

For a more detailed discussion of this issue, check out our White Paper here.

Source: Artificial Intelligence on Medium
