Over the past decade, there has been no shortage of examples of human bias creeping into artificial intelligence processes.
Back in 2020, Robert Williams, a Black resident of Farmington Hills, Michigan, was arrested and jailed after a police facial recognition algorithm incorrectly matched him to security footage of a shoplifting suspect — an error that reflects a known weakness such systems have in accurately identifying people with dark skin. In 2019, researchers showed that a software program widely used by hospitals to identify at-risk patients favored white patients across many types of care. And a few years earlier, Amazon largely abandoned the system it used to screen job applicants after discovering that it consistently preferred men over women.
How human bias is baked into AI algorithms is a complex matter.
Bias doesn’t have just one source, but bias problems often stem from the ways in which AI systems categorize and interpret data. The power of many artificial intelligence systems lies in their ability to recognize patterns and categorize objects, and that process often begins during training, when the systems learn from us. Consider, for example, an image recognition algorithm that finds all the cat pictures on your phone. The program’s intelligence begins during training, when the algorithm analyzes known images of cats selected by a human. Once the system has seen enough examples of cats, it acquires a new ability: it can extract the important features of “cat-ness,” which lets it decide that an image it has never seen before is an image of a cat.
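To make the training process above concrete, here is a minimal sketch of supervised classification. It stands in for a real vision system with made-up two-dimensional "feature vectors" (the feature names and numbers are illustrative assumptions, not an actual image pipeline), but the key step is the same: a human supplies the labeled examples, and the program generalizes from them.

```python
# Toy "cat detector": a nearest-centroid classifier over feature vectors.
# Everything here (features, numbers) is invented for illustration.

def centroid(examples):
    """Average the feature vectors of a group of labeled training examples."""
    n = len(examples)
    return tuple(sum(v[i] for v in examples) / n for i in range(len(examples[0])))

def train(cat_examples, not_cat_examples):
    """'Training' just summarizes what a human labeled as cat vs. not-cat."""
    return centroid(cat_examples), centroid(not_cat_examples)

def is_cat(features, model):
    """Classify an unseen image by which labeled group it sits closer to."""
    cat_c, other_c = model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(features, cat_c) < dist(features, other_c)

# Human-selected training examples, reduced to (ear_pointiness, whisker_score):
cats = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)]
dogs = [(0.2, 0.1), (0.3, 0.2), (0.1, 0.3)]

model = train(cats, dogs)
print(is_cat((0.85, 0.75), model))  # an unseen cat-like image -> True
```

Note that the classifier never sees a definition of "cat"; everything it knows is inherited from the human's choice of training examples, which is exactly where the judgment calls discussed next come in.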
The important thing to note about the example above is that the algorithm’s intelligence is built on a foundation of human judgment calls. Here, the main human judgment is the initial selection of images that a person deems to be cats, so the machine’s intelligence is entangled with our “bias” about what a cat looks like. Sorting cat photos is innocent enough, and if the algorithm makes a mistake and thinks your dog looks like a cat, it’s no big deal. But when we ask AI to perform consequential tasks, especially ones entangled with fraught human concepts like race, gender and sexuality, the mistakes algorithms make are no longer harmless. If a facial recognition system is less accurate at identifying dark-skinned people because it was trained predominantly on white faces, and someone ends up wrongfully arrested because of it, that’s obviously a big problem. Because of this, figuring out how to reduce bias in our artificial intelligence tools, now widely used in banking, insurance, healthcare, recruiting and law enforcement, is one of the most important challenges facing AI engineers today.
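The accuracy gap described above can be sketched with synthetic numbers. In this hedged toy model (the group names, distributions and threshold rule are all assumptions invented for demonstration, not real demographic data), a match threshold is tuned almost entirely on one group, and the under-represented group pays for it with a far higher error rate.

```python
# Toy demonstration: unrepresentative training data -> unequal accuracy.
# All data is synthetic; no real populations are modeled.
import random

random.seed(0)

def make_group(mean, n):
    """Synthetic 1-D 'matching feature' for a group, clustered around `mean`."""
    return [random.gauss(mean, 0.5) for _ in range(n)]

group_a = make_group(0.0, 200)   # heavily represented in training
group_b = make_group(1.5, 200)   # barely represented in training

# "Training" tunes the acceptance threshold on a sample dominated by group A.
training_sample = group_a + group_b[:5]
threshold = sum(training_sample) / len(training_sample) + 1.0

def accuracy(group):
    """Fraction of the group correctly matched (feature below the threshold)."""
    return sum(x < threshold for x in group) / len(group)

print(f"group A accuracy: {accuracy(group_a):.2f}")  # high
print(f"group B accuracy: {accuracy(group_b):.2f}")  # much lower
```

The threshold works well for the group it was fitted to and poorly for everyone else, which is the statistical shape of the facial recognition failures described above.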
University of Pennsylvania professor and UM School of Social Work alum Desmond Patton has been helping pioneer a promising approach to addressing AI bias. In his recent talk in our Caught Leaders speaker series, Patton argued that one of the biggest problems — and one that can be solved — is that we haven’t had all the right voices at the table while this technology is being developed and its critical human judgment calls are being made. Historically, AI systems have been the domain of technology companies, data scientists and software engineers. And while that community has the technical skills needed to build AI systems, it often lacks the sociological expertise that could help guard systems against bias or flag harmful uses. Social scientists, social workers, psychologists, health workers — these are specialists in people. And since the problem of AI bias is both technical and human, it makes sense for human experts and technical experts to work together.
Columbia University’s SAFE Lab, directed by Patton, is an interesting example of what this could look like in practice. The team is building algorithmic programs that use social media data to identify indicators of psychological and social phenomena such as anger, drug abuse, loss and sadness — with the ultimate goal of intervening earlier and better in people’s lives. It is a very complex artificial intelligence problem, so they throw a diverse team at it: social workers, computer scientists, data scientists, engineers, psychiatrists, nurses, young people and members of the public. One of the most interesting things they do is rely on social workers and local residents to interpret the social media data, so that the programmers building the algorithms start from the right interpretations. For example, Patton says, he once got a call from one of the program’s coders, who was concerned that the system was flagging the N-word as an “aggressive” term. That might be the appropriate classification if they were studying white communities. But given that their focus communities are Black and brown urban neighborhoods, the term was often used differently there. Having that kind of contextual information gave them a way to tweak the algorithm and make it better.
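The kind of fix Patton describes can be sketched as a naive keyword flagger adjusted by community context. This is a hedged illustration only: the word lists, function names and scoring below are invented for demonstration (using a benign context-dependent word rather than the slur from the anecdote), not the SAFE Lab’s actual system.

```python
# Illustrative sketch: a keyword-based "aggression" flagger, with an
# override list supplied by community domain experts. All terms and
# names here are assumptions for demonstration purposes.

AGGRESSIVE_TERMS = {"fight", "smoke"}   # naive default lexicon
CONTEXT_RECLASSIFIED = {"smoke"}        # terms local experts say are benign here

def flag_post(text, use_community_context=False):
    """Return the terms in `text` that the flagger marks as aggressive."""
    words = set(text.lower().split())
    hits = words & AGGRESSIVE_TERMS
    if use_community_context:
        # Domain experts' annotations remove context-dependent false positives.
        hits -= CONTEXT_RECLASSIFIED
    return hits

post = "gonna smoke with the crew later"
print(flag_post(post))                               # prints {'smoke'}
print(flag_post(post, use_community_context=True))   # prints set()
```

The design point is that the contextual knowledge lives in a layer the community can edit, rather than being hard-coded into the classifier itself.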
Patton says the SAFE Lab’s work also draws on the local knowledge of community members. “The difference in the way we do this work is found in what we call domain experts,” said Patton. “We [hire] young black and brown people from Chicago and New York City as research assistants in the lab, and we pay them like we pay graduate students. They spend time helping us translate and interpret the context. For example, the names of streets and institutions have different meanings depending on the situation. You can’t just look at a street on the South Side of Chicago and be like, ‘that’s just a street.’ That street may also be an invisible border between two rival gangs or factions. We wouldn’t know that unless we talked to people.”
Patton thinks approaches like these could transform artificial intelligence for the better. He also sees today as an important moment of opportunity in the history of AI. If the Internet as we know it evolves into something like a metaverse — a virtual space that merges work and social life — we will have a chance to learn from the mistakes of the past and create a more useful, equitable and enjoyable environment. But doing so will mean no longer seeing our technologies as mere technology, but as human creations that require input from the full spectrum of humanity. It will mean universities training programmers to think like sociologists as well as coders. It will mean police and social services departments finding meaningful ways to work together. And it will mean creating more opportunities for community members to work with academic experts like Patton and his SAFE Lab team. “I think that social work allows us to have a framework for how we can ask questions to begin the process of building behavioral technology programs,” Patton said. “We need the involvement of all members of the community — influencing who will be at the table, who is teaching, and how they are being taught — if we are going to fight bias.”