iSchool's Caliskan wins award to battle bias in artificial intelligence

By Michelle Dunlop Monday, September 9, 2024

Imagine losing out on your dream job because of bias in the AI tools used to screen resumes, or having your health care compromised for the same reason.

Those are the disturbing scenarios that Aylin Caliskan, an assistant professor in the University of Washington Information School, is dedicated to thwarting.

Caliskan was recently awarded a $603,342 National Science Foundation Faculty Early Career Development (CAREER) Award for her project, “The Impact of Associations and Biases in Generative AI on Society.” She plans to develop computational methods to measure biases in generative artificial intelligence systems and their impact on humans and society. Her goal, Caliskan says, is to reduce bias both in AI and in human-AI collaboration.

“Hopefully, in the long term, we will be able to raise awareness and provide tools to reduce the harmful consequences of bias,” said Caliskan, who became a co-director of the UW Tech Policy Lab earlier this year. Her research in computer science and artificial intelligence will also provide empirical evidence for tech policy.

Caliskan noted that AI operates in many settings people don’t realize. Companies often use AI to screen job applications; some colleges use it to screen student applications; and health-care providers use it to review patient data.

But because AI is trained on data produced by humans, it learns biases similar to those found in society. Women and members of minoritized ethnic groups face discrimination from AI systems more often than white men do, Caliskan said. She cited an example from current uses of generative AI in health care, in which African American patients may be prescribed less effective or lower-cost medications through AI than patients of European descent.

Caliskan’s work was among the first to develop methods to detect and quantify bias in AI. One difficulty she faces is that AI doesn’t work or “think” exactly like humans, despite being developed by them. Yet AI is deployed at a large scale and is helping to shape society.
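One of those methods, the Word Embedding Association Test (WEAT) from her Science paper, measures bias as the difference in statistical association between two sets of target words and two sets of attribute words. The sketch below is a minimal illustration in Python, not Caliskan’s own code: it computes a WEAT-style effect size with NumPy, and the randomly generated vectors are placeholders standing in for real model embeddings.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more strongly word vector w associates
    # with attribute set A than with attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size (Cohen's d style) of the differential association
    # of target sets X and Y with attribute sets A and B.
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Placeholder embeddings: in a real test these vectors would come from
# a trained model (e.g., vectors for target words such as
# "flower"/"insect" and attribute words such as "pleasant"/"unpleasant").
# Random vectors carry no learned associations, so the effect size
# printed here should hover near zero.
rng = np.random.default_rng(0)
dim = 50
X = [rng.normal(size=dim) for _ in range(8)]  # target word set 1
Y = [rng.normal(size=dim) for _ in range(8)]  # target word set 2
A = [rng.normal(size=dim) for _ in range(8)]  # attribute word set 1
B = [rng.normal(size=dim) for _ in range(8)]  # attribute word set 2

print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.3f}")

Run against real word vectors, a large positive effect size signals that the first target set is stereotypically associated with the first attribute set.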

Another challenge for Caliskan is that not all AI is the same. Many companies have their own proprietary AI systems, which they may or may not allow researchers like Caliskan to study.

One key to reducing bias in AI is understanding the mechanisms of bias and where it originated, she said. Some bias is cultural, societal or historical. And figuring out what is “fair” in a specific context and task isn’t trivial.

“There are many fairness notions,” Caliskan said. “We don’t have simple, straightforward answers to these complex open questions.”
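Two common fairness notions from the research literature are demographic parity (equal selection rates across groups) and equal opportunity (equal selection rates among the truly qualified). The toy numbers below are invented purely for illustration; they show how a single set of screening decisions can satisfy one criterion while violating the other.

import numpy as np

# Hypothetical screening outcomes for two groups. y marks who is truly
# qualified; yhat marks who the model advances. All values are invented.
y_a    = np.array([1, 1, 1, 0, 0, 0])  # group A: ground truth
yhat_a = np.array([1, 1, 1, 0, 0, 0])  # group A: model decisions
y_b    = np.array([1, 1, 0, 0, 0, 0])  # group B: ground truth
yhat_b = np.array([1, 0, 1, 1, 0, 0])  # group B: model decisions

def selection_rate(yhat):
    # Demographic parity compares raw selection rates across groups.
    return yhat.mean()

def true_positive_rate(y, yhat):
    # Equal opportunity compares selection rates among the truly qualified.
    return yhat[y == 1].mean()

print("Selection rate, A vs. B:", selection_rate(yhat_a), selection_rate(yhat_b))
print("True-positive rate, A vs. B:",
      true_positive_rate(y_a, yhat_a), true_positive_rate(y_b, yhat_b))
# Selection rates match (0.5 vs. 0.5) while true-positive rates do not
# (1.0 vs. 0.5): the same decisions look fair under one definition
# and biased under another.

Because criteria like these can be mathematically incompatible, choosing among them requires judgment about context and values, which is exactly the open question Caliskan describes.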

Caliskan noted that she grew up as a multilingual immigrant, which fostered her interest in fairness. She speaks German, Turkish and Bulgarian as well as English.

“I was able to observe and live in different cultures in my childhood and observe different societies,” she said. “I have always been fascinated by culture and languages.”

Since coming to the UW, Caliskan has been invited to speak at AI-related events at Stanford University, Howard University, the Santa Fe Institute, and the International Joint Conference on Artificial Intelligence. Her paper rigorously showing that AI reflects cultural stereotypes was published in the journal Science.

In 2023, Caliskan was listed among the 100 Brilliant Women in AI Ethics by the Women in AI Ethics organization. She previously received an NSF award for her work on privacy and fairness in planning while using third-party sources. Caliskan is teaching a course on generative AI literacy this fall. 

Caliskan’s NSF grant will last for five years, but she doesn’t see her work on the subject ending then. Late last year, she was awarded a $1,043,249 grant from the National Institute of Standards and Technology for a similar project, “Human-AI Bias Interaction in Decision Making.”

“I see this research going on my entire life,” she said. “Since bias cannot be entirely eliminated, this is a lifelong problem.”

However, Caliskan believes that identifying, measuring and reducing bias can help align AI with societal values and raise awareness.

“I don’t think eliminating bias entirely in AI or people is possible,” she said. But “when we know we are biased, we adjust our behavior.”