Artificial intelligence software can be taught to emulate human decision making. But that means it can be taught to emulate human prejudices, like racism or sexism. Two California high school students, in partnership with a computer scientist and an undergraduate at the Massachusetts Institute of Technology, say they’ve found a way to prevent this, at least in some cases.
Brothers Jashandeep and Arashdeep Singh worked with MIT researcher Amar Gupta and second-year student Ariba Khan to create DualFair, a new method of training the artificial intelligence software that helps banks decide whether to approve mortgage applications.
The Singh brothers attend Floyd Buchanan High School in Clovis, Calif. Both are Sikhs, practitioners of a religion that originated in India more than 500 years ago. They say they have routinely been subjected to ethnic and religious slurs. “We were really passionate about working toward that idea of negating discrimination and helping other people who are facing discrimination,” said Jashandeep Singh.
Gupta, who led the effort, said that he usually teams up with MIT students on such projects through a program called Hackers Heaven, which targets ambitious undergrads. But exceptional high schoolers who apply are sometimes welcomed. When the Singh brothers asked to participate, Gupta decided to give them a chance.
“Generally we are very selective,” he said. “In the case of high school students we are even more selective. … This was almost destiny that this happened.”
AI programs learn to make decisions by studying datasets made up of millions of real-world examples. AI mortgage-lending programs use datasets containing many previous mortgage applications. By studying which applications were approved and which were not, the software learns to recognize the characteristics of a trustworthy borrower.
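In skeletal form, that training step looks something like the sketch below. The data, column names, and model here are invented for illustration; real lending systems learn from millions of records and far more variables.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a dataset of past applications and the human decisions on them.
applications = pd.DataFrame({
    "income":       [85, 42, 120, 38, 95, 51],     # thousands of dollars
    "loan_amount":  [300, 250, 410, 280, 320, 260],
    "credit_score": [720, 640, 780, 600, 700, 655],
    "approved":     [1, 0, 1, 0, 1, 0],            # the historical labels
})

features = applications[["income", "loan_amount", "credit_score"]]

# The model infers what an "approvable" application looks like,
# including any prejudice baked into the historical labels.
model = LogisticRegression().fit(features, applications["approved"])
```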
But these millions of old mortgage applications were processed by humans, some of them prejudiced against lending to Black people or Hispanic people or women. Adding their biased decisions to the training database could teach the AI software to make similarly prejudiced decisions.
DualFair isn’t an artificial intelligence program. Instead, it’s a way to prepare the training databases such programs use. It applies a variety of techniques to previous home mortgage decisions collected by the federal government, then tries to weed out data that could teach an AI to make unfair judgments not only about race but also about gender and ethnicity.
For instance, if an AI system in training says a Black female applicant should be rejected for a loan, the DualFair method tests for bias by changing that applicant’s protected attributes and retesting: it might make the applicant white or male or Hispanic without altering any other data in the application. If the AI then grants the loan when the same person is tagged as, say, a Hispanic male, that’s evidence of bias. Whenever that happens, the application is removed from the training database. Only applications that are approved or rejected regardless of race, gender, or ethnicity are used for the AI training.
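In code, that counterfactual test might look like the sketch below. This is a simplified illustration, not the published DualFair pipeline: the predictor, the attribute lists, and the field names are all stand-ins.

```python
from itertools import product

RACES = ["Black", "White", "Asian"]
ETHNICITIES = ["Hispanic", "Non-Hispanic"]
GENDERS = ["Male", "Female"]

def decision_is_consistent(predict, application):
    """True only if `predict` returns the same verdict for every
    combination of race, ethnicity, and gender on this application."""
    verdicts = set()
    for race, ethnicity, gender in product(RACES, ETHNICITIES, GENDERS):
        variant = {**application, "race": race,
                   "ethnicity": ethnicity, "gender": gender}
        verdicts.add(predict(variant))
    return len(verdicts) == 1

def filter_training_set(predict, applications):
    """Keep only applications whose verdict never flips when the
    protected attributes change."""
    return [a for a in applications if decision_is_consistent(predict, a)]

# Toy predictor that unfairly penalizes one group, for demonstration only.
def biased_predict(app):
    return app["credit_score"] > 650 and app["race"] != "Black"

apps = [
    {"credit_score": 700, "race": "Black",
     "ethnicity": "Non-Hispanic", "gender": "Female"},
    {"credit_score": 500, "race": "White",
     "ethnicity": "Hispanic", "gender": "Male"},
]
print(filter_training_set(biased_predict, apps))  # only the second survives
```

Dropping inconsistent applications, rather than relabeling them, means the training set keeps only decisions that hold regardless of group membership, at the cost of shrinking the dataset.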
DualFair also makes sure to include an equal percentage of accepted and rejected applications for each possible combination of race, ethnicity, and gender.
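That balancing step might be sketched as follows, assuming the data sits in a pandas DataFrame with race, ethnicity, gender, and approved columns (all names invented here):

```python
import pandas as pd

def balance(df, seed=0):
    """Downsample each (race, ethnicity, gender) subgroup so accepted
    and rejected applications appear in equal numbers."""
    pieces = []
    for _, group in df.groupby(["race", "ethnicity", "gender"]):
        approved = group[group["approved"] == 1]
        rejected = group[group["approved"] == 0]
        n = min(len(approved), len(rejected))
        if n == 0:
            continue  # one outcome missing entirely; skip this subgroup
        pieces.append(approved.sample(n, random_state=seed))
        pieces.append(rejected.sample(n, random_state=seed))
    return pd.concat(pieces, ignore_index=True)
```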
When a mortgage-lending AI was trained using DualFair and tested on real-world mortgage data from seven US states, it was less likely than a conventionally trained system to reject otherwise qualified borrowers because of their race, sex, or ethnicity.
The team’s research was published online in March by the peer-reviewed academic journal Machine Learning and Knowledge Extraction. Now the researchers hope to find out if DualFair can be used to eliminate bias in other types of AI programs, including software used by physicians. For example, Gupta supervised a study that found that Black hospital patients are given painkillers less frequently than white patients. A medical version of DualFair, if it works, could redress the imbalance and protect Black patients from needless suffering.
Meanwhile, the Singh brothers have been admitted to MIT for the fall semester. But they haven’t yet decided whether they’ll accept. Both said they’ve been accepted at several other colleges as well.
Hiawatha Bray can be reached at hi***********@gl***.com. Follow him on Twitter @GlobeTechLab.