RAISE expo highlights students' solutions to AI problems

Story by Hallie Schwartz | Photos by Doug Parry | Tuesday, October 21, 2025
Ph.D. student Kyra Wilson speaks about her research during the RAISE Fall Expo on the Amazon campus in Seattle.

Five Information School students were among dozens who shared insights from their research at the RAISE research center’s recent Fall Exposition in downtown Seattle. 

RAISE, the Center for Responsibility in AI Systems and Experiences, aims to advance artificial intelligence that serves the public good, emphasizing responsible and trustworthy AI. 

Hosted on the Amazon campus on Oct. 3, the exposition began with keynote speeches from Jeetu Mirchandani, director of Applied AI at Amazon, and Anat Caspi, principal scientist at the Paul G. Allen School of Computer Science and Engineering. 

Mirchandani highlighted the importance of building trust into AI and finding a balance between trust and efficiency. He stressed that maintaining this balance is key to developing responsible AI.

“Trust is harder to learn than efficiency. Efficiency is just money. You can experiment with it. You can't really experiment with trust, because once you've lost it, you can't get it back,” he said.

Mirchandani closed by reinforcing that trust must be built into AI from the beginning, reminding attendees that responsibility starts at the product proposal stage, not after biases emerge.

Caspi spoke about the tensions surrounding the introduction of AI into civic tech, as it’s often implemented without consideration for its necessity or public acceptance. 

“We do need to address some of the fraught tensions that exist with AI, particularly as we move more and more of the technology into everyday life,” she said.

Ph.D. student Nicholas Clark speaks to a visitor about his research.

She referenced some of her work with OS Connect, a Washington state-funded database of sidewalks and pedestrian crossings that aims to help pedestrians find accessible routes statewide. Caspi stressed that integrating AI into civic systems is not just about infrastructure but about the people who use it and their access to opportunity. 

“We need to remember this because when we train on existing data, and we know that a lot of folks are already excluded from connection to opportunity – are we adequately representing what people are wanting to do, or what they're currently able to do?” she said.

After speeches wrapped up, attendees shared poster presentations offering a glimpse into their AI research. Among the researchers were Information School Ph.D. students Nicholas Clark, Bingbing Wen and Kyra Wilson and Informatics students Hoda Ayad and Ruth Nakigozi (pictured at top).

Clark’s poster shared his research, “Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery.” 

He finds that while large language models (LLMs) are becoming an increasingly common way for people to learn information, the models know little about how individual users want that information delivered. 

“When your only mechanism to communicate your preferences for a language model is through natural language, or a prompt, it can get pretty annoying, especially when that's not persistent across sessions,” he said.

Clark and his research team hypothesize that a more structured interface would make it easier for users to communicate those preferences.

Informatics student Hoda Ayad speaks with two visitors about her poster.

Ayad is also researching LLMs, specifically the human-like moral behaviors people associate with the models when using them. 

“There are a lot of risks that have been studied that come with this anthropomorphic behavior including overreliance or overtrusting on the model,” she said.

In her research, titled “Proposed Noncompliance in LLM Moral Judgements,” Ayad argues that since LLMs don’t have moral value systems, they should push back rather than immediately answer when asked to make a value judgment.

“Even when the LLM has an opinion, it’s inconsistent. It has a value system of averages,” she said.

After the poster sessions, attendees were divided into groups, each assigned a question to spark discussion about responsible AI. 

Nakigozi shared her group’s response to a question about AI’s environmental impact: “How should we balance the benefits of expanding AI-powered resources for underserved communities against the environmental harms these systems may cause – especially when those harms disproportionately impact the same communities?”

She emphasized the importance of educating people about AI’s negative impacts on the environment. 

“We realized we are making [AI] very easy [to use], and people don't really understand how this technology works. They don't know the data centers are run by gigantic water systems, so we need to take on different ways to help educate people,” she said.

Her group discussed the potential of using solar or other renewable energy for the data centers that run AI technologies.

The event concluded with networking opportunities for the researchers, offering students the chance to connect with some of the AI specialists and researchers at Amazon.