Design approach may help fix bias in artificial intelligence

C C

RELEASE: Bias in artificial intelligence (AI) and machine learning programs is well established. Researchers from North Carolina State University and Pennsylvania State University are now proposing that software developers incorporate the concept of "feminist design thinking" into their development process as a way of improving equity - particularly in the development of software used in the hiring process.

"There seem to be countless stories of ways that bias in AI is manifesting itself, and there are many thought pieces out there on what contributes to this bias," says Fay Payton, a professor of information systems/technology and University Faculty Scholar at NC State. "Our goal here was to put forward guidelines that can be used to develop workable solutions to algorithm bias against women, African American and Latinx professions in the IT workforce.

"Too many existing hiring algorithms incorporate de facto identity markers that exclude qualified candidates because of their gender, race, ethnicity, age and so on," says Payton, who is co-lead author of a paper on the work. "We are simply looking for equity - that job candidates be able to participate in the hiring process on an equal footing."

Payton and her collaborators argue that an approach called feminist design thinking could serve as a valuable framework for developing software that reduces algorithmic bias in a meaningful way. In this context, the application of feminist design thinking would mean incorporating the idea of equity into the design of the algorithm itself.
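The paper itself is conceptual and does not prescribe any particular implementation. Still, one common way developers operationalize "equity in the algorithm itself" is to audit a hiring model's outcomes across demographic groups, for example with the selection-rate ("four-fifths") comparison used in employment-fairness analysis. The sketch below is a hypothetical illustration of that kind of audit, not a method from the paper; the function names, the sample data, and the 0.8 threshold convention are assumptions for the example.

```python
# Hypothetical fairness-audit sketch (not from the paper): compare hiring
# selection rates across groups and compute the disparate-impact ratio,
# i.e. the lowest group selection rate divided by the highest. A ratio
# below ~0.8 is conventionally flagged for review (the "four-fifths rule").
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired_bool) pairs -> {group: rate}."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate over highest (1.0 means parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example data: group A is hired at rate 0.5, group B at rate 0.25.
sample = ([("A", True)] * 4 + [("A", False)] * 4 +
          [("B", True)] * 2 + [("B", False)] * 6)
print(disparate_impact_ratio(sample))  # 0.25 / 0.5 = 0.5, below 0.8
```

An audit like this only measures outcome disparities; the authors' broader point is that equity should shape the design process itself, not just be checked after the fact.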

"Compounding the effects of algorithmic bias is the historical underrepresentation of women, Black and Latinx software engineers to provide novel insights regarding equitable design approaches based on their lived experiences," says Lynette Yarger, co-lead author of the paper and an associate professor of information sciences and technology at Penn State.

"Essentially, this approach would mean developing algorithms that value inclusion and equity across gender, race and ethnicity," Payton says. "The practical application of this is the development and implementation of a process for creating algorithms in which designers are considering an audience that includes women, that includes Black people, that includes Latinx people. Essentially, developers of all backgrounds would be called on to actively consider - and value - people who are different from themselves.

"To be clear, this is not just about doing something because it is morally correct. But we know that women, African Americans and Latinx people are under-represented in IT fields. And there is ample evidence that a diverse, inclusive workforce improves a company's bottom line," Payton says. "If you can do the right thing and improve your profit margin, why wouldn't you?"

The paper, “Algorithmic equity in the hiring of underrepresented IT job candidates,” is published in the journal Online Information Review. The paper was co-authored by Bikalpa Neupane of Penn State.
