Reliance on Algorithms in Housing Decisions Can Lead to Discrimination

For years, the federal government supported the unwarranted denial of housing opportunities to non-White people. For example, since its founding in 1934, the Federal Housing Administration has insured more than 40 million home loans, and for decades, due to pernicious, commonplace, and unchecked racism, those loans went almost exclusively to White people. This open discrimination was simply a matter of course, how business was done, sanctioned by the US government. In short, over time, it has led to generations of divestment, lost opportunities, lost fortunes, grave health disparities, and countless other irreversible consequences for Black (and other) individuals and families in the United States. This is just one of the many forms of discrimination woven into the country’s housing and finance systems. The collective, devastating impact of this practice and other mechanisms has entrenched segregation and produced an ever-widening gap in homeownership rates between Americans of different races.

Fast forward to the 21st century, where one of the new faces of discriminatory practice has emerged: the use of computers to aid in decision-making. It is reasonable to think that replacing human judgment with a machine might help to minimize the effects of unreasonable bias, but the truth is that such bias persists, same as ever. A person sitting across from you might try to hide their prejudices, yet still inadvertently reveal them through body language, offhand remarks, or even the way they phrase their questions, before you have formulated a single answer in your own mind. But just as much bias can be at work, invisible from the outside, inside the computers that financial institutions now trust to decide whether people should get home loans and which types of loans should be made available to whom.

Recently, a fair housing advocate conducted a simple test. She ran image searches on Google using positive adjectives like “beautiful” and “handsome” to see what kinds of people would come up. Sadly, her searches yielded lopsided results: the vast majority of the pictures showed White and fair-skinned people, nowhere near the diversity that actually exists in the world. It raised the question: why would even a machine’s search show a bias toward certain kinds of people? The answer is that, consciously or unconsciously, the creators of the search’s algorithms (the programmed instructions that carry out its calculations and analysis) infused bias into how the search was processed.

Multiple studies have shown that housing discrimination in mortgage lending is as pervasive as ever, even with the increased reliance on computers (that is, even with prejudiced humans supposedly taken out of the process). In this age of computer-driven decisions, Black and Hispanic borrowers still pay $765 million more in home mortgage interest per year than equally qualified White borrowers. In other words, even computers, looking at equally qualified mortgage applicants, are concluding that people of color, because they are people of color, should collectively pay over three quarters of a billion dollars more in interest every year than White people. And in terms of home loan denial rates, Black applicants are still more than twice as likely to be denied as their White counterparts.

When people talk about “AI,” they are usually talking about artificial intelligence: the way computers handle a situation by applying logic to make decisions that might otherwise be made by a human being. For example: you ask a computer what factors are considered in deciding a family’s creditworthiness for a home loan. The computer recognizes your request, interprets that you want to know what counts toward credit, and provides you with that information, drawing on the internet or another source.

On top of this, there is a concept called “machine learning,” in which, within the framework of artificial intelligence, a computer or program changes based on the information it receives, using algorithms to look for patterns in that information and form a new understanding of the topic at hand. Continuing the example above: perhaps, when asked what counts toward credit, the computer previously gave only a basic list of money on hand, loan history, and debt-to-income ratio. But through your feedback, or through information found on the internet, the computer “realizes” there is more to the story, and in future responses it also lists employment history, zip code, rental history, and other criteria that were not on the original list.
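To make that idea concrete, here is a minimal sketch in Python. The applicant records, factor names, and correlation threshold are all invented for illustration; no real lender’s model is represented. It shows how a program that starts with a basic list of credit factors can come to “count” new ones simply because they appear to track repayment in whatever data it is fed.

```python
# Minimal sketch: a program that expands its list of "credit factors"
# based on patterns in (hypothetical) historical records.
# All data, factor names, and thresholds below are illustrative assumptions.

basic_factors = ["cash_on_hand", "loan_history", "debt_to_income"]

# Hypothetical past applicants: candidate factor values, plus whether they repaid.
history = [
    {"employment_years": 8, "zip_code_score": 0.9, "rental_history": 1, "repaid": 1},
    {"employment_years": 1, "zip_code_score": 0.2, "rental_history": 0, "repaid": 0},
    {"employment_years": 6, "zip_code_score": 0.8, "rental_history": 1, "repaid": 1},
    {"employment_years": 2, "zip_code_score": 0.3, "rental_history": 0, "repaid": 0},
]

def correlation(xs, ys):
    """Plain Pearson correlation, written out by hand to keep the sketch self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def learn_new_factors(records, threshold=0.7):
    """Add any candidate factor whose values track repayment in the records."""
    repaid = [r["repaid"] for r in records]
    candidates = [k for k in records[0] if k != "repaid"]
    learned = []
    for factor in candidates:
        values = [r[factor] for r in records]
        if abs(correlation(values, repaid)) >= threshold:
            learned.append(factor)
    return learned

expanded_factors = basic_factors + learn_new_factors(history)
print(expanded_factors)
# The program now also "counts" employment history, zip code, and rental history,
# not because anyone decided they are fair, but because they correlate in the data.
```

Nothing in this process asks whether a newly learned factor, such as zip code, is a fair thing to count; it is added simply because the pattern is there.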

Using machines to make decisions is now a very common practice, and many people interact with artificial intelligence daily. Sometimes you will know it when you see it, such as the automated system with the robotic voice that answers when you call a company’s customer service line. Other times, you may be evaluated by computers behind the scenes. Without your ever knowing it, banks and other entities may be using computers (instead of people on their staff) to determine your creditworthiness, interest rate, loan terms, and so on. This saves businesses and organizations money, because computers are faster than people and require neither an office nor a salary.
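Behind the scenes, that kind of automated evaluation can be as plain as a scoring function. The sketch below is a Python illustration with invented fields, weights, and cutoffs; it is an assumption about what such a system might look like, not any institution’s actual rules.

```python
# Illustrative sketch of a fully automated loan decision, with no human review.
# The fields, weights, and cutoffs are invented for illustration only.

def automated_decision(applicant):
    score = (
        0.4 * applicant["credit_score"] / 850        # normalized credit score
        + 0.3 * (1 - applicant["debt_to_income"])    # lower debt-to-income scores higher
        + 0.3 * min(applicant["employment_years"], 10) / 10
    )
    if score < 0.55:
        return {"approved": False}
    # Interest rate and terms also come straight from the score.
    rate = 7.5 - 3.0 * (score - 0.55) / 0.45         # hypothetical rate schedule
    return {"approved": True, "interest_rate_pct": round(rate, 2), "term_years": 30}

print(automated_decision(
    {"credit_score": 710, "debt_to_income": 0.30, "employment_years": 6}
))
```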

But this use of machines can also come at a great cost to society. As mentioned before, computer-based decisions often rest on considerations that perpetuate discrimination. There may be no racist human policy-maker openly declaring that certain people simply do not deserve opportunities. But when people who harbor those very attitudes lay the foundation for the computer programs that step in to make the decisions, they effectively teach the machines to mirror their biases.

Programmers can (and do) tell computer systems to base their decisions on particular criteria, the same criteria a human would use in the machine’s place. But those criteria are historically rooted in prejudice, and when such a system looks at outcomes from a real world chock-full of biased decisions, it has little choice but to follow along and discriminate, all while, ironically, trying to “do the job right.” Because there is a vast record of housing discrimination by humans in transactions based on race, gender, religion, and other characteristics, a computer taught simply to step in for those humans, in order to save a financial institution time and money, will draw on that record and learn its lessons from a lopsided and unequal legacy. And feeling none of the social pressure that an embarrassed or chastened human might feel to be sensitive to, and learn from, accusations of bias or favoritism, such a computer will, by design, continue that legacy.
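As a rough illustration of how that legacy gets carried forward, consider the following sketch. The historical records are fabricated, with the bias deliberately baked in, and the “model” does nothing more than learn the approval rates found in those records; that is precisely why it reproduces them.

```python
# Sketch: a "model" that learns only from past human decisions.
# The historical records below are fabricated, with bias deliberately baked in.

past_decisions = [
    # (neighborhood, applicant_qualified, human_approved)
    ("northside", True, True), ("northside", True, True), ("northside", False, True),
    ("southside", True, False), ("southside", True, False), ("southside", True, True),
]

def learn_approval_rates(records):
    """Learn, per neighborhood, how often humans approved applicants in the past."""
    rates = {}
    for hood, _, approved in records:
        yes, total = rates.get(hood, (0, 0))
        rates[hood] = (yes + int(approved), total + 1)
    return {hood: yes / total for hood, (yes, total) in rates.items()}

rates = learn_approval_rates(past_decisions)

def model_decision(hood):
    """Approve whenever humans usually approved people 'like this' before."""
    return rates[hood] >= 0.5

print(rates)                        # {'northside': 1.0, 'southside': 0.333...}
print(model_decision("northside"))  # True, even for a weaker applicant
print(model_decision("southside"))  # False, even for a fully qualified applicant
```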

In other words, in housing-related decisions, computers are used to fill in for people, making decisions as people would, because what they “know” is drawn from the history of what people have previously done. So the computers act just as people have acted. And unfortunately, people have been atrocious (please see the first paragraph, above).

A programmer may not even realize the danger in assuming that a computer cannot produce biased decisions. Even if race itself is never named among the criteria a system uses, other information it considers may nonetheless stand in for race, such as a zip code or a last name. Again, if a computer is informed by historic and current patterns of home loan and credit decisions made by human beings, it will absorb lessons in discrimination and end up developing biases, all while “thinking” it is simply doing its job.
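The point about indirect, or proxy, variables can be shown in a few lines. In this sketch, race is never given to the scoring rule at all, only a zip code; the zip codes, their demographics, and the score adjustments are invented for the example. Because the zip code lines up with race in the made-up data, identical applicants still receive different outcomes along racial lines.

```python
# Sketch: race is excluded from the inputs, but zip code stands in for it.
# Zip codes, demographics, and scoring adjustments are invented for this example.

# Hypothetical segregated geography: each zip code's majority group.
zip_demographics = {"60601": "mostly White", "60620": "mostly Black"}

# Hypothetical "historical risk" adjustment by zip code, itself a product of
# decades of disinvestment rather than of the applicants in front of the system today.
zip_risk_adjustment = {"60601": 0.00, "60620": -0.15}

def score_applicant(credit_score, zip_code):
    # Race never appears here, yet the zip adjustment carries its history.
    return credit_score / 850 + zip_risk_adjustment[zip_code]

for zip_code in ("60601", "60620"):
    s = score_applicant(credit_score=700, zip_code=zip_code)
    print(zip_code, zip_demographics[zip_code], "approved" if s >= 0.80 else "denied")
# Identical credit scores, different outcomes, split along the lines the
# zip code quietly encodes.
```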

To combat all of this, people have called for increased transparency in the decision-making of these artificial intelligence systems, so that the biases embedded in their programming can be rooted out and corrected. Additionally, government oversight of these processes can help curtail the continued illegal discrimination plaguing the United States housing market. To be sure, with so much at stake, increased transparency and oversight are not much to ask for. Whether pivotal housing decisions are made by computers or still by humans, the goal, to paraphrase Professor Darrick Hamilton, is that one’s demographics have no bearing on the outcomes of one’s transactions. That is what we need to strive toward.
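Transparency can begin with audits as simple as the sketch below: given a log of automated decisions (fabricated here for illustration), compute approval rates by group and flag any large gap. The group labels, the decisions, and the 80 percent threshold are assumptions; the threshold is loosely modeled on the “four-fifths” rule of thumb sometimes used in disparate-impact analysis.

```python
# Sketch of a basic fairness audit over an automated system's decision log.
# The decision log and the 0.8 threshold are illustrative assumptions.

decision_log = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": True}, {"group": "A", "approved": False},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rates(log):
    """Compute each group's approval rate from the decision log."""
    totals, approvals = {}, {}
    for row in log:
        totals[row["group"]] = totals.get(row["group"], 0) + 1
        approvals[row["group"]] = approvals.get(row["group"], 0) + int(row["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag any group whose approval rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

rates = approval_rates(decision_log)
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(flag_disparity(rates))   # {'A': False, 'B': True}: disparity flagged for group B
```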