Four Information School Ph.D. students were among the researchers who shared their work at the first RAISE Winter Exposition, a showcase for the group’s research on responsible artificial intelligence.
Hosted in the Zillow Commons at the Paul G. Allen Center for Computer Science and Engineering, the event highlighted original research, ongoing projects and published work, including that of Nicholas Clark, Preetam Damu, Anna-Maria Gueorguieva and Navreet Kaur. Responsibility in AI Systems and Experiences (RAISE) is an interdisciplinary center dedicated to research and education in the area of responsible AI.

“It started by some of us recognizing that AI is starting to make a huge impact in many ways and many aspects of our lives,” Co-Founding Director and iSchool Professor Chirag Shah (pictured at top) said at the exposition.
The Feb. 28 event began with a keynote speech from Ece Kamar, the vice president and lab director of AI Frontiers at Microsoft Research. Kamar spoke about AI agents as the next frontier in AI and emphasized the need to develop responsible AI models, especially as AI systems contribute to misinformation and raise concerns about privacy risks.
Following the keynote, attendees gathered for a poster session where RAISE affiliates showcased their research on responsible AI.
Clark presented on the “Epistemic Model Behavior” framework, which analyzes how large language models (LLMs) support the knowledge-seeking process. His research applies epistemology to AI systems to examine how users engage with AI-generated information.
“This framework emerged from my observation that users bring different knowledge expectations to AI interactions,” Clark said. “Some want definitive answers with citations, others prefer balanced exploration of competing viewpoints.”
Clark’s research focuses on epistemic responsibility, epistemic personalization and testimonial reliability – factors that led him to examine challenges around trust in AI-generated responses.
“My research uncovered that major AI providers like OpenAI and Anthropic have yet to fully articulate their epistemological positions, creating uncertainty around design choices that now influence how millions of users seek and validate information,” Clark said.
Gueorguieva presented research examining how LLMs perceive stigmatized groups differently from humans and what biases may emerge in AI-generated outputs. Her work applies psychology literature on stigmatized groups to better understand how AI perceives marginalized communities.
“My research is motivated by identifying and mitigating harms that may be caused by the perpetuation of social biases in AI systems,” Gueorguieva said.
Her findings showed that LLMs struggle to conceptualize how stigma manifests, often producing biased responses when prompted.
However, she found that bias was less pronounced against individuals of certain racial, gender or religious identities compared to other stigmatized groups. Previous work with LLMs has primarily focused on sociodemographic groups, with less attention paid to other marginalized identities such as being HIV-positive, experiencing homelessness or having a disability.
“This finding suggests that bias mitigation techniques for LLM outputs may be working, as most bias mitigation has focused on sociodemographic stigmas,” Gueorguieva said.

Kaur presented research on inaccurate and insensitive responses from conversational AI agents, particularly how these agents respond to questions on topics like addiction. Her study examined how conversational AI agents respond to different users, including patients, caregivers, health-care practitioners and researchers.
“These personas represent diverse needs, behaviors and experiences, helping us evaluate how well LLMs align with the information needs of users seeking support on stigmatized topics,” Kaur said.
The preliminary results show that while LLM responses are empathetic, they often lack quality, raising questions about whether the information is adequate for users with less background knowledge.
“This highlights the need to balance empathetic language with relevant and accessible information,” Kaur said. “We hope that our findings can guide model developers in designing appropriate models for stigmatized contexts and inform users about the risks and benefits of using conversational AI agents to seek information and support on stigmatized topics.”
Other presenters at the event included CSE students, outside researchers and even a high school student. The event concluded with small group discussions and networking opportunities, allowing researchers to exchange ideas on the future of responsible AI.