Aylin Caliskan adds to iSchool expertise in ethical AI

By Mary Lynn Lyke
Friday, August 27, 2021

Can machines be sexist? Researcher Aylin Caliskan found her answer when she entered “O bir profesör. O bir öğretmen.” into Google Translate on her computer. “O” is a gender-neutral pronoun in Turkish; it can mean she, he, or it. The Google program, powered by statistical machine-translation algorithms, translated the Turkish sentences as “He is a professor. She is a teacher.”

The gender-biased response was not an anomaly, she found. As they process massive language datasets, rapidly learning as they go, artificial intelligence (AI) programs tend to associate female terms (she, hers, her, sister, daughter, mother, grandmother) with arts and family. They link male terms with science, mathematics, power and career.
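Those associations can be measured directly in the word embeddings that underlie such systems, where every word is a vector and relatedness is scored by cosine similarity. Here is a minimal sketch in Python, using toy hand-set vectors as stand-ins for real pre-trained embeddings such as GloVe or word2vec (the numbers are illustrative only):

    import numpy as np

    def cosine(a, b):
        # Cosine similarity: values near 1 mean strongly associated
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Toy 2-D vectors; real studies use embeddings trained on large text corpora.
    vecs = {
        "she": np.array([0.9, 0.1]), "he": np.array([0.1, 0.9]),
        "family": np.array([0.8, 0.2]), "career": np.array([0.2, 0.8]),
    }

    for target in ("she", "he"):
        for attr in ("family", "career"):
            print(f"{target} ~ {attr}: {cosine(vecs[target], vecs[attr]):.2f}")

In embeddings trained on real web text, “she” lands measurably closer to family and arts terms than “he” does, the very pattern described above.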

“This is the way AI perceives the world,” says Caliskan, who joins the iSchool faculty this fall as an assistant professor specializing in the emerging field of AI ethics.

Sexism, racism, ageism, ableism, and LGBTQ discrimination are rapidly spreading through such everyday, big-tech tools as text generation, voice assistance, translation and information retrieval, warns Caliskan. “Machines are replicating, perpetuating and amplifying these biases, yet no regulation is in place to audit them.”

Aylin Caliskan, photographed at Seattle's Olympic Sculpture Park, says a more diverse tech workforce will help fight bias in artificial intelligence. "A diverse set of developers can start building AI systems that align with their own needs and values and test them based on their own lived experiences," she says. (Photos by Doug Parry)

Caliskan will join the UW’s new Responsible AI Systems & Experiences team, a research group investigating how intelligent information systems interact with human society. “These researchers are world leaders in the field,” she says. “It is extremely humbling to be part of it.”

She has been following research at the iSchool for more than a decade, starting with professor Batya Friedman’s seminal paper tracking bias in computer systems in the late ’90s. “The iSchool is home to the first foundational work that I know of in this domain,” says Caliskan.

A respected leader in the field, she holds a Ph.D. in Computer Science from Drexel University and a Master of Science in Robotics from the University of Pennsylvania, and she served as a postdoctoral researcher at Princeton University’s Center for Information Technology Policy. "Aylin recognized the importance of this topic back in 2015 when there were relatively few people thinking about it. Her ability to anticipate emerging threats has allowed her to make timely and vital contributions again and again," says her former adviser Arvind Narayanan, associate professor of computer science at Princeton University.

Caliskan — a math whiz as a child growing up in Istanbul — is helping develop critical new methods to systematically detect and quantify humanlike bias in machine learning. “Now, for the first time, we have tools where we can go back in time, look at historical data, see how the bias evolves, how it impacts fairness, equity, social structures, and how it is shaping society in an accelerated and biased manner.”
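The best known of these methods is the Word Embedding Association Test (WEAT), which Caliskan introduced with Joanna Bryson and Arvind Narayanan in a 2017 Science paper. It compares how strongly two sets of target words (say, female versus male terms) associate with two sets of attribute words (say, family versus career terms). A minimal sketch of its effect-size statistic, again with toy vectors and abbreviated word lists standing in for real pre-trained embeddings:

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(w, A, B, vecs):
        # s(w, A, B): mean similarity of w to attribute set A minus set B
        return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
                - np.mean([cosine(vecs[w], vecs[b]) for b in B]))

    def weat_effect_size(X, Y, A, B, vecs):
        # Standardized difference in association between target sets X and Y
        sX = [association(x, A, B, vecs) for x in X]
        sY = [association(y, A, B, vecs) for y in Y]
        return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

    # Toy vectors; real tests use full word lists and corpus-trained embeddings.
    vecs = {
        "sister": np.array([0.9, 0.1]), "mother": np.array([0.8, 0.2]),
        "brother": np.array([0.1, 0.9]), "father": np.array([0.2, 0.8]),
        "family": np.array([0.7, 0.3]), "home": np.array([0.9, 0.2]),
        "career": np.array([0.3, 0.7]), "science": np.array([0.2, 0.9]),
    }
    X, Y = ["sister", "mother"], ["brother", "father"]  # target word sets
    A, B = ["family", "home"], ["career", "science"]    # attribute word sets
    print(weat_effect_size(X, Y, A, B, vecs))  # large positive value = bias

A large positive effect size means the female target words sit closer to the family terms and the male target words closer to the career terms, mirroring the implicit-association effects psychologists measure in people.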

As researchers like Caliskan dig deep into AI systems, the harmful effects of bias contamination are becoming increasingly apparent. Racially biased algorithms have led to inequitable sentences for convicted criminals and to wrongful arrests of African Americans based on faulty facial recognition systems. An investigation of one hospital showed the algorithm it used prioritized care for white patients over Black patients. Biased AI hiring tools that determine who gets a callback have shown a preference for white names over Black names.

Where do these biases originate? Researchers point to the language, culture and perceptions of the AI developers who devise the algorithms and train machines to use them. The majority of these professionals — almost 80 percent — are white males. “The systems they create may disadvantage anyone who is not a white male,” says Caliskan.

Diversifying the AI workforce is an important move in addressing AI bias, she says. “A diverse set of developers can start building AI systems that align with their own needs and values and test them based on their own lived experiences.”

Establishing ethical AI is a complex task, but Caliskan sees some big tech companies taking first steps: reconsidering the much-debated idea of regulations and standards, opening AI systems to outside audits, and hiring experts to investigate AI discrimination. Start-ups that offer tools to remove bias from AI systems are also popping up.

And more and more academics are joining forces to mitigate problems before they get out of control. Their role is critical in reining in AI bias, says Caliskan. “As researchers, we can inform policy so that the harmful side effects of these systems can at least potentially be slowed down.”

The work, she says, is urgent. “The bias life cycle is already so accelerated it is threatening our democracy, our cognition, our values, and our social processes.”